Thesis 985202074: Detailed Record




Author: 陳柏儒 (Bo-Ru Chen)    Department: Computer Science and Information Engineering
Thesis Title: Detection of Face Orientation and its Applications in Vestibular Rehabilitation and Human-Computer Interface
Related theses:
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delays
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Styles: From English Writing to Game Making
★ A Prediction Model for Diabetic Nephropathy Based on Laboratory Test Values
★ Design of a Remote-Sensing Image Classifier Based on Fuzzy Neural Networks
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study of Fingerprint Classifiers
★ A Study of Backlit Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Computer Interfaces
★ Data Visualization Combining Swarm Intelligence and Self-Organizing Maps
★ Development of a Pupil-Tracking System for Human-Computer Interface Applications for People with Disabilities
★ An Artificial-Immune-System-Based Online Learning Neuro-Fuzzy System and Its Applications
★ Application of Genetic Algorithms to Speech Descrambling
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed for personal, non-profit retrieval, reading, and printing for academic research purposes only.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the thesis without authorization.

Abstract (Chinese): In recent years, the Active Appearance Model (AAM) has been widely used for face detection and facial feature extraction. Based on the AAM, this thesis proposes a new face-orientation tracking algorithm and applies the resulting orientation information to vestibular rehabilitation exercises and a human-computer interface.
The thesis first uses the AAM to coarsely locate the five facial features needed to compute the face orientation: the outer corners of both eyes, the tip of the nose, and both corners of the mouth. Skin-color information is then used to fine-tune the detected feature points. In addition, because the AAM fails when the face is turned too far, an optical flow algorithm tracks the feature points from the previous frame to ensure the facial features can still be located. Using the relative distances among these five features in the image and the structure of the human face, the thesis computes the face's deviation angles relative to the camera, yielding the face-orientation information.
First, the orientation information is combined with interactive game elements and applied to the head-movement portion of vestibular rehabilitation. Patients need not wear any device or be accompanied by a physician, and can perform the exercises at home with a simple setup. The system also records each session so that rehabilitation progress can be evaluated later. The second application is a head-controlled mouse, which lets users who cannot operate a mouse by hand control a computer by turning their heads. Combined with the communication-aid software previously developed in our laboratory, it further enables people with disabilities to perform everyday tasks such as typing, audio/video entertainment, and home-appliance control. Both applications were validated with experiments designed to confirm their effectiveness and limitations.
Abstract (English): In recent years, the Active Appearance Model (AAM) has been widely applied to the detection of human faces and the extraction of facial features. Based on the AAM, this thesis proposes a flexible algorithm for tracking the facial orientation in face images. The orientation information is then applied to the development of a vestibular rehabilitation exercise system and a human-computer interface.
This thesis adopts the AAM to coarsely detect five facial features: the far corners of the eyes, the tip of the nose, and the far corners of the mouth. These features are necessary for computing the facial orientation. The information of skin and non-skin regions is then used to fine-tune the locations of the five features coarsely detected by the AAM. Because the traditional AAM fails when the slant of a human face is too large, an optical flow tracking method is adopted to track the features from the previous image whenever the AAM cannot find them in the current image. The facial orientation is finally computed from a geometrical analysis of these five features.
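The geometrical analysis mentioned above can be illustrated with a toy example. The thesis's exact formulas (Section 3-2) are not reproduced in this record; the following is a simplified sketch under a symmetric-face assumption, estimating roll from the tilt of the eye-corner line and yaw from the horizontal offset of the nose tip relative to the eye midline. The function name and the normalization choices are hypothetical, not taken from the thesis.

```python
# Hypothetical sketch: estimating head roll and yaw from the five 2D
# landmarks used in the thesis (far eye corners, nose tip, far mouth
# corners). A full pitch estimate would similarly use the nose position
# between the eye line and the mouth line; it is omitted for brevity.
import math

def estimate_orientation(left_eye, right_eye, nose, left_mouth, right_mouth):
    """Return (roll, yaw) in degrees from (x, y) landmark positions."""
    # Roll: in-plane rotation, read from the tilt of the eye-corner line.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))

    # Yaw: out-of-plane rotation, approximated from how far the nose tip
    # deviates horizontally from the eye-corner midline, normalized by
    # half the inter-ocular distance (symmetric-face assumption).
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    interocular = math.hypot(dx, dy)
    offset = (nose[0] - mid_x) / (interocular / 2.0)
    yaw = math.degrees(math.asin(max(-1.0, min(1.0, offset))))
    return roll, yaw

# A frontal face: level eyes, nose tip on the eye midline.
roll, yaw = estimate_orientation(
    (100, 100), (160, 100), (130, 140), (110, 170), (150, 170))
```

With the frontal landmark set above, both angles come out zero; moving the nose tip off the midline yields a non-zero yaw.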
First, the facial orientation information, incorporated with an interactive game, is applied to the development of a vestibular rehabilitation exercise system. Patients are able to perform vestibular exercises at home without wearing any device or being accompanied by a doctor. In addition, the system records corresponding evaluation parameters during each exercise. The second application of the facial orientation information is a human-computer interface called a “head mouse,” which allows a person who cannot use his or her hands to manipulate a computer simply by rotating the head. Combined with the communication aid previously developed by our laboratory, the head mouse further allows people with severe disabilities to type, surf the Web, enjoy A/V entertainment, control home appliances, and so on.
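A head mouse of the kind described above needs a mapping from head angles to cursor motion. The thesis's actual control law is in Section 4-2-1 and is not reproduced here; the following is a hypothetical joystick-style sketch with a dead zone so small involuntary head movements do not move the cursor. The function name, gain, and dead-zone values are illustrative assumptions.

```python
# Hypothetical head-mouse control step: map head yaw/pitch (degrees)
# to a per-frame cursor displacement (pixels), joystick-style.
# gain and dead_zone are illustrative, not the thesis's values.

def head_to_cursor_delta(yaw, pitch, dead_zone=5.0, gain=2.0):
    """Map head angles to a (dx, dy) cursor displacement in pixels."""
    def axis(angle):
        if abs(angle) <= dead_zone:
            return 0.0  # ignore small drift around the neutral pose
        # Displacement grows with how far the angle exceeds the dead zone.
        sign = 1.0 if angle > 0 else -1.0
        return sign * gain * (abs(angle) - dead_zone)
    # Head turned right (positive yaw) moves the cursor right; head
    # tilted up (positive pitch) moves it up, i.e. negative screen y.
    return axis(yaw), -axis(pitch)

dx, dy = head_to_cursor_delta(15.0, -8.0)  # turned right, tilted down
```

Calling the function once per video frame and adding the returned delta to the current cursor position gives relative, self-centering control: returning the head to the neutral pose stops the cursor.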
Keywords (Chinese):
★ active appearance model
★ face orientation detection
★ vestibular rehabilitation exercise
★ optical flow tracking
★ feature extraction
★ human-computer interaction interface
Keywords (English):
★ active appearance model
★ human-computer interface
★ vestibular rehabilitation
★ detection of face orientation
★ optical flow tracking
★ feature extraction
Table of Contents
Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Motivation
1-2 Objectives
1-3 Thesis Organization
Chapter 2: Related Work
2-1 Vestibular Rehabilitation Exercises
2-2 Communication Aid Systems
2-3 Face Detection
2-4 Face Recognition
Chapter 3: Methods and Procedures
3-1 Facial Feature Detection
3-1-1 Viola-Jones Face Detection
3-1-2 Active Appearance Model (AAM)
3-1-3 Eye-Corner Correction Algorithm
3-1-4 Optical Flow Tracking Algorithm
3-1-5 Verifying the Correctness of Facial Features
3-2 Face Orientation Detection
3-3 Face Recognition
Chapter 4: Human-Computer Interface and Rehabilitation Applications
4-1 System Environment
4-2 System Operation Flow
4-2-1 Head-Controlled Mouse
4-2-2 Communication Aid System
4-2-3 Vestibular Rehabilitation Exercise
Chapter 5: Experiments
5-1 Facial Feature Detection Experiments
5-2 Face Orientation Detection Experiments
5-3 Human-Computer Interface Input Experiments
5-3-1 Chinese Input Interface Experiment
5-3-2 English Input Interface Experiment
5-4 Desktop Control Experiments
5-5 Face Recognition Experiments
Chapter 6: Conclusions and Future Work
References
References
[1] T. Cawthorne, “Vestibular Injuries,” in Proc. of the Royal Society of Medicine, vol. 39, pp. 270-272, 1946.
[2] F. S. Cooksey, “Rehabilitation in Vestibular Injuries,” in Proc. of the Royal Society of Medicine, vol. 39, pp. 273-275, 1946.
[3] K. M. V. McConville and S. Virk, “Motor Learning in a Virtual Environment for Vestibular Rehabilitation,” in 3rd International IEEE/EMBS Conference on Neural Engineering, pp. 600-603, 2-5 May 2007.
[4] P. J. Sparto, J. M. Furman, S. L. Whitney, L. F. Hodges, and M. S. Redfern, “Vestibular rehabilitation using a wide field of view virtual environment,” in 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 4836-4839, 1-5 Sep. 2004.
[5] J. E. Deutsch, M. Borbely, J. Filler, K. Huhn, and P. Guarrera-Bowlby, “Use of a Low-Cost, Commercially Available Gaming Console (Wii) for Rehabilitation of an Adolescent With Cerebral Palsy,” Physical Therapy, vol. 88, no. 10, pp. 1196–1207, Oct. 2008.
[6] 葉錦諺, “An Image-Based Human-Computer Interface for People with Severe Disabilities,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2008.
[7] 邱國鈞, “Development of a Pupil-Tracking System and Its Applications,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2006.
[8] T. Hain. (2010, Oct. 3). Vestibular Rehabilitation Therapy (VRT) [Online]. Available: http://www.dizziness-and-balance.com/treatment/rehab.html [Accessed: Jun. 23, 2011].
[9] C. McGibbon, D. Krebs, S. Wolf, P. Wayne, D. Scarborough, and S. Parker, “Tai Chi and vestibular rehabilitation effects on gaze and whole body stability,” Journal of Vestibular Research, vol. 14, no. 6, pp. 467-478, 2004.
[10] 黃偉順, “Applications of Virtual Reality Technology to Limb and Balance Rehabilitation of Stroke Patients,” Master's thesis, Institute of Biomedical Engineering, National Yang-Ming University, 2001.
[11] J. G. Wang and E. Sung, “Study on eye gaze estimation,” IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 32, no. 3, pp. 332-350, Jun. 2002.
[12] Z. Zhu and Q. Ji, “Eye gaze tracking under natural head movements,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 918-923, 20-25 Jun. 2005.
[13] L. J. G. Vazquez, A. M. Minor, and A. J. H. Sossa, “Low cost human computer interface voluntary eye movement as communication system for disabled people with limited movements,” in Health Care Exchanges, pp. 165-170, 28 Mar.-1 Apr. 2011.
[14] 莊英傑, “Development of a Pupil-Tracking System for Human-Computer Interface Applications for People with Disabilities,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2004.
[15] H. Lim and V. K. Singh, “Design of healthcare system for disable person using eye blinking,” in 4th Annual ACIS International Conference on Computer and Information Science, pp. 551-555, 2005.
[16] R. Heishman and Z. Duric, “Using Image Flow to Detect Eye Blinks in Color Videos,” in IEEE Workshop on Applications of Computer Vision, p. 52, Feb. 2007.
[17] B. Chambayil, R. Singla, and R. Jha, “Virtual keyboard BCI using Eye blinks in EEG,” in IEEE 6th International Conference on Wireless and Mobile Computing, Networking and Communications, pp. 466-470, 11-13 Oct. 2010.
[18] M. Nabati and A. Behrad, “Camera mouse implementation using 3D head pose estimation by monocular video camera and 2D to 3D point and line correspondences,” in 5th International Symposium on Telecommunications, pp. 825-830, 4-6 Dec. 2010.
[19] M. Betke, J. Gips, and P. Fleming, “The Camera Mouse: visual tracking of body features to provide computer access for people with severe disabilities,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, vol. 10, no. 1, pp. 1-10, Mar. 2002.
[20] Camera Mouse Inc. [Online]. Available: http://www.cameramouse.org/ [Accessed: Jun. 23, 2011].
[21] R. Chellappa, C. L. Wilson, and S. Sirohey, “Human and Machine Recognition of Faces: A Survey,” in Proc. of the IEEE, vol. 83, no. 5, pp. 705-741, May 1995.
[22] W. Zhao, R. Chellappa, and A. Rosenfeld, “Face Recognition: A Literature Survey,” ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, Dec. 2003.
[23] C. Garcia, G. Zikos, and G. Tziritas, “Face Detection in Color Images using Wavelet Packet Analysis,” in IEEE International Conference on Multimedia Computing and Systems, vol. 1, pp. 703-708, Jul. 1999.
[24] R. L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, “Face Detection in Color Images,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696-706, May 2002.
[25] C. Lin and K. C. Fan, “Human Face Detection Using Geometric Triangle Relationship,” in Proc. of 15th International Conference on Pattern Recognition, vol. 2, pp. 941-944, 2000.
[26] P. Viola and M. J. Jones, “Robust real-time face detection,” in Proc. of 8th IEEE International Conference on Computer Vision, vol. 2, pp. 747, 2001.
[27] S. Pavani, D. Delgado, and A. F. Frangi, “Haar-like features with optimally weighted rectangles for rapid object detection,” Pattern Recognition, vol. 43, no. 1, pp. 160–172, 2010.
[28] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, Jun. 2001.
[29] I. Matthews and S. Baker, “Active Appearance Models Revisited,” International Journal of Computer Vision, vol. 60, no. 2, pp. 135-164, Nov. 2004.
[30] V. P. Kshirsagar, M. R. Baviskar, and M. E. Gaikwad, "Face recognition using Eigenfaces," in 3rd International Conference on Computer Research and Development, vol. 2, pp. 302-306, 11-13 Mar. 2011.
[31] R. Brunelli and T. Poggio, "Face recognition: features versus templates," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, Oct. 1993.
[32] L. Wiskott, J. M. Fellous, N. Kruger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," in Proc. of International Conference on Image Processing, vol. 1, pp. 129-132, 26-29 Oct. 1997.
[33] X. He, S. Yan, Y. Hu, P. Niyogi, and H. J. Zhang, “Face recognition using Laplacianfaces,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328-340, Mar. 2005.
[34] G. Friedrich and Y. Yeshurun, “Seeing people in the dark: face recognition in infrared images,” in Proc. of the Second International Workshop on Biologically Motivated Computer Vision, no. 2525, pp. 348-359, 2002.
[35] 曾郁展, “A DSP-Based Real-Time Face Recognition System,” Master's thesis, Department of Electrical Engineering, National Sun Yat-sen University, 2005.
[36] B. K. P. Horn and B. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 59, pp. 81-87, 1993.
[37] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. of DARPA Image Understanding Workshop, pp. 121–130, 1984.
[38] J. Y. Bouguet, “Pyramidal Implementation of the Lucas-Kanade Feature Tracker,” Intel Corporation, Microprocessor Research Labs, 1999.
[39] G. Bradski and A. Kaehler, Learning OpenCV, O'Reilly Media, p. 328, Sep. 2008.
[40] A. Gee and R. Cipolla, “Determining the Gaze of Faces in Images,” Image and Vision Computing, vol. 12, no. 10, pp. 639-647, Dec. 1994.
[41] S. C. Yeh, J. Stewart, M. McLaughlin, T. Parsons, C. Winstein, and A. Rizzo, “Evaluation approach for post-stroke rehabilitation via virtual reality aided motor training,” in Proc. of the 2007 international conference on Ergonomics and health aspects of work with computers, pp. 378-387, 2007.
[42] G. Slabaugh. (1999, Aug. 20). Computing Euler angles from a rotation matrix [Online]. Available: http://www.gregslabaugh.name/publications/euler.pdf [Accessed: Jun. 23, 2011].
[43] 陳建隆, “Using an Improved Empirical Mode Decomposition to Remove Uneven Illumination from Document Images,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, 2009.
[44] Head Mouse [Online]. Available: http://nipg.inf.elte.hu/headmouse/headmouse.html [Accessed: Jun. 23, 2011].
Advisor: 蘇木春 (Mu-Chun Su)    Date of Approval: 2011-7-12
