More than 60% of human communication is non-verbal, carried by facial expressions and body gestures, and there is no reason a robot could not communicate in the same way; this is the goal the present study pursues. The thesis is grounded in kansei engineering, which relates human psychological cognition to human physiology: by understanding the interaction between the facial expression muscles and the nerves that drive them, facial expressions can be controlled. Myoelectric (EMG) signals of the expression muscles are measured with an Active EMG sensor and, after filtering and normalization, yield usable parameters. These parameters are then processed and classified by an artificial neural network to extract the characteristic features needed to simulate the expressions of the human face. We hope that in the near future robots will no longer be expressionless steel, but will instead possess rich, vivid expressions and interact closely with humans.
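The following is a minimal sketch, not the thesis implementation, of the pipeline described above: band-pass filter the raw EMG signal, rectify and normalize it, and classify simple per-channel features into expression categories with a small neural network. The channel count, filter band (20-450 Hz), feature choice, and network size are all assumptions introduced here for illustration, and the training data shown are synthetic placeholders.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPClassifier

def preprocess_emg(raw, fs=1000.0):
    """Band-pass filter (assumed 20-450 Hz), rectify, and normalize one EMG channel."""
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)
    return rectified / (np.max(rectified) + 1e-12)  # scale to [0, 1]

def feature_vector(channels, fs=1000.0):
    """Mean amplitude and RMS per channel as simple example features."""
    feats = []
    for ch in channels:
        x = preprocess_emg(ch, fs)
        feats.extend([x.mean(), np.sqrt(np.mean(x ** 2))])
    return np.array(feats)

# Hypothetical training set: rows of per-channel features, labels are expression classes.
rng = np.random.default_rng(0)
X_train = rng.random((40, 8))          # 40 samples, 4 channels x 2 features
y_train = rng.integers(0, 3, size=40)  # e.g. 0 = neutral, 1 = smile, 2 = frown

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Classify a new (synthetic) 4-channel EMG recording into an expression class.
new_channels = [rng.standard_normal(1000) for _ in range(4)]
print(clf.predict([feature_vector(new_channels)]))

In practice the extracted class label would drive the corresponding facial actuators of the robot, which is outside the scope of this sketch.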