Thesis/Dissertation Record 955202009: Detailed Information




Author  Chun-Kai Yang (楊淳凱)   Graduate Department  Computer Science and Information Engineering
Thesis Title  A SOM-based Facial Expression Recognition System
(Chinese title: 基於自我組織特徵映射圖之人臉表情辨識)
  1. Access permission for this electronic thesis: approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  Facial expressions play an important role in daily life as a form of non-verbal communication, so facial expression recognition has become a topic of active research and development. This thesis develops an automatic facial expression recognition system that captures images from a digital camera and automatically performs face detection, feature extraction, and expression recognition. By combining face detection, eye detection, the concept of feature regions, feature-point selection, optical-flow tracking, and a finite-state-machine mechanism, an efficient and fast automatic expression recognition system is constructed.
The system detects both eyes to locate the facial feature regions precisely, and proposes a modified self-organizing feature map (SOM) algorithm that selects and tracks facial feature points automatically and in real time. A two-stage neighborhood mechanism combined with correlation-coefficient optical-flow tracking provides fast tracking of the facial feature points, and expressions are recognized from the movements of these points on the face. The system built on these methods performs well on several facial expression databases. Finally, a finite-state-machine mechanism automatically segments a continuous image sequence into expression subsequences, so that the system can recognize a user's expressions in real time through a digital camera, achieving a fully automatic real-time expression recognition system.
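As a rough illustration of the feature-point selection idea described above, the sketch below fits a small one-dimensional SOM to 2-D candidate points (for example, edge pixels inside a facial feature region) so that the converged node weights can serve as feature points. The network size, learning-rate decay, and Gaussian neighborhood here are generic textbook choices, not the thesis's modified algorithm or its actual parameters.

```python
import numpy as np

def som_feature_points(pixels, n_nodes=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Fit a 1-D SOM to 2-D candidate points; node weights become feature points.

    pixels: (N, 2) array of candidate point coordinates.
    Returns an (n_nodes, 2) array of converged node positions.
    """
    rng = np.random.default_rng(seed)
    # Initialize node weights at randomly chosen candidate points.
    nodes = pixels[rng.choice(len(pixels), n_nodes, replace=False)].astype(float)
    idx = np.arange(n_nodes)
    for t in range(epochs):
        # Linearly decay the learning rate and neighborhood radius.
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for p in pixels[rng.permutation(len(pixels))]:
            # Best-matching unit: the node closest to the input point.
            bmu = np.argmin(np.sum((nodes - p) ** 2, axis=1))
            # Gaussian neighborhood over the 1-D node chain.
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            # Pull the BMU and its neighbors toward the input point.
            nodes += lr * h[:, None] * (p - nodes)
    return nodes
```

Because every update moves a node toward a data point, the converged nodes spread out over (and stay within) the cloud of candidate pixels, which is why their final positions can be read off directly as feature-point locations.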
Abstract (English)  Visual communication is very important in the daily lives of humans as social beings. In particular, facial expressions can reveal a great deal of information without a single word. Automatic facial expression recognition systems can be applied to many practical applications such as human-computer interaction, stress-monitoring systems, low-bandwidth videoconferencing, and human behavior analysis. Thus, in recent years, research on developing automatic facial expression recognition systems has attracted a lot of attention from various fields. The goal of this thesis is to develop an automatic facial expression recognition system which can automatically detect human faces, extract features, and recognize facial expressions. The input to the proposed algorithm is a sequence of images, since dynamic images provide more information about facial expressions than a single static image.
After a human face is detected, the system first detects the eyes and then accurately locates the facial feature regions. The movements of the facial features (eyebrows, eyes, and mouth) are strongly related to the shown facial expression; however, extracting facial features is sometimes a very challenging task. A modified self-organizing feature map algorithm is developed to extract feature points automatically and effectively. A two-stage neighborhood-correlation optical-flow tracking algorithm is then adopted to track the facial feature points, and the optical-flow information of these points is used for facial expression recognition. Most importantly, a segmentation method based on a finite state machine is proposed to automatically segment a video stream into units of facial expressions. Each segmented unit is then fed to the recognition module, which decides which facial expression appears in the corresponding unit. Experiments were conducted to test the performance of the proposed facial expression recognition system.
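The finite-state-machine segmentation described above can be illustrated with a minimal sketch: a two-state machine that opens an expression segment when a per-frame motion energy (for example, the summed optical-flow magnitude of the tracked points) exceeds an onset threshold and closes it when the energy falls back below an offset threshold. The state set, the thresholds `on_th` and `off_th`, and the minimum segment length are hypothetical simplifications, not the machine actually used in the thesis.

```python
def segment_expressions(motion, on_th=0.5, off_th=0.2, min_len=3):
    """Split a stream of per-frame motion energies into expression segments.

    A two-state FSM: NEUTRAL while motion is low; EXPRESSION from the frame
    where motion rises above on_th until it drops below off_th. Segments
    shorter than min_len frames are discarded as noise.
    Returns a list of (start_frame, end_frame) pairs (end exclusive).
    """
    segments, state, start = [], "NEUTRAL", None
    for i, m in enumerate(motion):
        if state == "NEUTRAL" and m >= on_th:
            # Onset detected: open a new segment.
            state, start = "EXPRESSION", i
        elif state == "EXPRESSION" and m < off_th:
            # Offset detected: close the segment if it is long enough.
            if i - start >= min_len:
                segments.append((start, i))
            state = "NEUTRAL"
    # Stream ended while an expression was still in progress.
    if state == "EXPRESSION" and len(motion) - start >= min_len:
        segments.append((start, len(motion)))
    return segments
```

In a live system, each returned `(start, end)` window would be passed to the expression classifier (a multilayer perceptron in this thesis) to label the expression shown in that unit.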
Keywords (Chinese) ★ facial expression recognition
★ feature region
★ feature extraction
★ eye detection
★ self-organizing feature map
★ optical flow tracking
Keywords (English) ★ facial expression recognition
★ facial feature region
★ facial feature
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
1. Introduction
1-1 Motivation
1-2 Objectives
1-3 Thesis Organization
2. Related Work
2-1 Face Detection
2-2 Feature Extraction
2-2-1 Image-based Feature Extraction
2-2-2 Video-based Feature Extraction
2-3 Expression Classification and Recognition
3. Facial Expression Recognition System
3-1 System Architecture and Flow
3-2 Face Detection
3-2-1 Method
3-2-2 Sample Test Results
3-3 Feature Region Setting
3-3-1 Eye Detection
3-3-2 Normalization and Feature Region Definition
3-3-3 Sample Results
3-4 Self-Organizing Feature Maps for Feature Point Selection
3-4-1 Self-Organizing Feature Map
3-4-2 Modified Self-Organizing Feature Map for Feature Point Selection
3-4-3 Parameter Settings and Sample Iteration Process
3-5 Optical-Flow Tracking of Feature Points and Feature Vector Extraction
3-5-1 Two-Stage Neighborhood Mechanism and Correlation-Coefficient Optical-Flow Tracking
3-5-2 Feature Vector Acquisition
3-5-3 Parameter Settings and Sample Tracking Process
3-6 Expression Recognition
3-6-1 Multilayer Perceptron
3-6-2 Classifier Parameter Settings and Recognition Method
3-7 Automatic Segmentation of Expression Image Sequences with a Finite State Machine
4. Experimental Results and Analysis
4-1 System Environment
4-2 Facial Expression Databases
4-2-1 Cohn-Kanade AU-Coded Facial Expression Database
4-2-2 FEEDTUM
4-3 Eye Localization Results and Analysis
4-4 Facial Expression Results and Analysis
4-4-1 Recognition Results of This Thesis
4-4-2 Comparison with Related Work
4-5 Automatic Facial Expression Recognition System Demonstration and Results
4-6 Recognition Performance and Evaluation
5. Conclusions and Future Work
5-1 Conclusions
5-2 Future Work
References
Advisor  Mu-Chun Su (蘇木春)   Date of Approval  2008-07-21

For thesis-related questions, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. - Privacy Policy Statement