Master's/Doctoral Thesis 102521068: Detailed Record




Name: Yi-ming Huang (黃翊銘)    Department: Department of Electrical Engineering
Thesis title: Robot Head Mechanism Design and Control, and Implementation of Vision Functions
Related theses
★ Control of a hybrid power supply system based on a direct methanol fuel cell
★ Water quality inspection for hydroponic plants using refractive-index measurement
★ A DSP-based automatic guidance and control system for a model car
★ Redesign of the motion control of a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-ramp and off-ramp signals
★ On the fuzziness of fuzzy sets
★ Further improvement of the motion control performance of a dual-mass spring-coupled system
★ A vision system for air hockey
★ Robot offense and defense control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ A real-time recognition system for access-control monitoring
★ Air hockey: a human playing against a robotic arm
★ A mahjong tile recognition system
★ Application of correlation-error neural networks to radiometric measurement of vegetation and soil moisture
★ Standing control of a three-link robot
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. The open-access electronic full text is authorized for retrieval, reading, and printing by users only for personal, non-profit academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) This thesis, together with a companion thesis [1], completes a nimble robot head endowed with hearing, touch, and vision, making it a robot head that is both entertaining and interactive. To realize the nimble head mechanism, micro servo motors and a linkage structure control the movable points, such as the eyes, eyelids, eyebrows, and mouth, giving the robot human-like changes of facial expression. A turntable is also designed at the base of the robot so that the camera can rotate to follow the user, and the head is combined with the neck mechanism designed in [1] to complete the robot head. On the system side, the robot is developed on a personal computer platform and realizes robot vision with image data from a Kinect. The image processing covers face recognition and gesture recognition, so the system can respond correctly to the user's gestures, while the face recognition result lets the robot track the user's face automatically. The system then uses the depth information of the face region to establish a depth range for the hand, combines it with skin color detection to obtain the hand foreground, and finally extracts gesture features, such as the positions, lengths, and number of fingers and the position and radius of the palm center, to command the robot to perform the corresponding actions. For robot functions, this thesis designs a photo-taking and web-album flow: a password-protected web album is hosted on a server, so the user can retrieve the photos conveniently while keeping them private. Finally, the robot can execute four main functions by gesture: 1) photo taking and web album; 2) music and singing [1]; 3) speech recognition and speaking [1]; 4) responses to touching and patting [1]. The simple gesture flows let users become familiar with the functions quickly and improve the interaction experience between the user and the robot.
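To make the hand-foreground step above concrete, the following is a minimal sketch in Python with OpenCV and NumPy, not the thesis code: it gates the image by a depth window placed in front of the detected face, keeps skin-colored pixels in YCbCr space, and cleans the mask with morphology. The frame registration, the depth-window offsets, and the skin thresholds are all illustrative assumptions.

```python
# Hedged sketch of the hand-segmentation idea described in the abstract.
# Assumes the color and depth frames are already registered pixel-wise;
# offsets and thresholds below are illustrative, not the thesis's values.
import cv2
import numpy as np

def segment_hand(color_bgr, depth_mm, face_depth_mm,
                 near_offset=600, far_offset=100):
    """Return a binary mask of hand-candidate pixels."""
    # 1) Depth gate: assume the gesturing hand lies in front of the face.
    near = max(face_depth_mm - near_offset, 1)
    far = face_depth_mm - far_offset
    depth_mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

    # 2) Skin gate in YCbCr space (textbook thresholds; cf. [28]).
    #    OpenCV orders the channels Y, Cr, Cb.
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # 3) Combine the gates, then remove speckle and fill small holes.
    mask = cv2.bitwise_and(depth_mask, skin_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```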
Abstract (English) This study proposes a nimble robot head with auditory, tactile, and visual senses. Completed jointly with another researcher [1], the robot head provides both entertainment and interaction with people. To make the head nimble, its mechanism uses a linkage structure and servo motors to drive the movable points: the eyelids, eyes, eyebrows, and mouth. Controlling these points produces different facial expressions. The robot also has a turntable mechanism at its base, so that the camera mounted on the table can follow the user, and the head mechanism is combined with the neck mechanism designed in [1]. On the system side, the robot is developed on a PC. A Kinect sensor captures color and depth information of the user to implement the robot's vision, which recognizes the user's face and gestures and makes the robot respond correspondingly. From the recognized face position, the robot can turn to follow and look at the user's face. To recognize gestures correctly, a region of interest (ROI) is defined; within the ROI, image processing that combines skin color detection with finding the positions and lengths of the fingers and the position and radius of the palm recognizes the user's gestures, so that the robot head understands the user's commands and performs the corresponding motions. The photo-taking function works as follows: after photos are taken, the user can retrieve them from the cloud album conveniently and privately with the access code shown on the LCD. In summary, the robot performs four main gesture-driven functions: (1) photo taking, (2) song singing [1], (3) speech recognition and dialogue [1], and (4) responses to touching and patting [1]. With simple gestures, the user can operate these functions quickly, which also improves interaction with the robot.
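The feature-extraction step named in both abstracts (palm center position and radius, plus finger candidates from the hand contour) could be sketched as below, loosely in the spirit of the contour and convexity methods of [33] and [34]. The distance-transform palm estimate and the defect-depth test are illustrative choices, and OpenCV 4's findContours signature is assumed; the thesis's exact criteria may differ.

```python
# Hedged sketch: palm center/radius and finger-gap count from a binary hand mask.
# Not the thesis implementation; the criteria below are illustrative.
import cv2
import numpy as np

def hand_features(mask):
    """Return palm center, palm radius, and a count of deep convexity defects."""
    # Take the largest external contour as the hand (OpenCV 4 signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)

    # Palm center/radius: the interior point farthest from the mask boundary.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)

    # Finger gaps: convexity defects that are deep relative to the palm radius.
    # An open hand with five spread fingers typically yields four such gaps.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    gaps = 0
    if defects is not None:
        for start, end, farthest, depth in defects[:, 0]:
            if depth / 256.0 > 0.5 * radius:  # defect depth is fixed-point /256
                gaps += 1
    return {"center": center, "radius": radius, "finger_gaps": gaps}
```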
Keywords (Chinese) ★ robot head
★ robot emotion
★ face recognition
★ gesture recognition
Keywords (English) ★ robot head
★ robot emotion
★ face recognition
★ gesture recognition
Table of Contents

Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1
1.1 Research Background and Motivation
1.2 Literature Review
1.3 Thesis Objectives
1.4 Thesis Organization
Chapter 2
2.1 System Architecture
2.2 Hardware Architecture
2.2.1 Computer Equipment
2.2.2 Introduction to the Kinect
2.3 Mechanism Design
2.3.1 Eyeball Mechanism
2.3.2 Eyelid Mechanism
2.3.3 Eyebrow Mechanism
2.3.4 Mouth Mechanism
2.3.5 Head Bumper Mechanism
2.3.6 Turntable Mechanism
2.3.7 Neck Mechanism Design
Chapter 3
3.1 Face Recognition
3.2 Image Preprocessing for Gesture Recognition
3.2.1 Integrating Depth and Color Information
3.2.2 Setting the Hand Depth Threshold
3.2.3 Skin Color Detection and Morphological Processing
3.3 Gesture Recognition
3.3.1 Contour Extraction and Definition of Palm Center and Radius
3.3.2 Finger Feature Search
3.3.3 Gesture Features and Classification
Chapter 4
4.1 Robot Facial Expression Control
4.2 Face Tracking Function
4.3 Gesture-Based Function Flows
Chapter 5
5.1 Experimental Environment
5.2 Gesture Recognition Results
5.3 Face Tracking Results
5.4 Photo Taking and Web Album Results
Chapter 6
6.1 Conclusions
6.2 Future Work
References

References
[1] 劉敦義 (advised by Prof. Wen-june Wang), Robot Neck Mechanism Design and Control, with Implementation of Tactile and Speech Functions, Master's thesis, Department of Electrical Engineering, National Central University, June 2015.
[2] Robear, Japanese nursing-care robot, http://www.theverge.com/2015/4/28/8507049/robear-robot-bear-japan-elderly, accessed June 2015.
[3] Pepper, service robot, https://www.aldebaran.com/en/a-robots/who-is-pepper, accessed June 2014.
[4] Kiva, Amazon warehouse robot, http://www.kivasystems.com/, accessed May 2011.
[5] MIT Media Lab – Personal Robots Group, http://robotic.media.mit.edu/projects/robots/mds/overview/overview.html, accessed 2008.
[6] K. Itoh, H. Miwa, Y. Nukariya, et al., "Development of face robot to express the facial features," in Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, Okayama, Japan, Sept. 2004, pp. 347-352.
[7] W. G. Wu, Q. M. Meng, and Y. Wang, "Development of the humanoid head portrait robot system with flexible face and expression," in Proceedings of the 2004 IEEE International Conference on Robotics and Biomimetics, Shenyang, China, Aug. 2004, pp. 718-723.
[8] Y. Takahashi and H. Sato, "Compact robot face with simple mechanical components," in Proceedings of the IEEE International Conference on Control Automation and Systems, Gyeonggi-do, Korea, Oct. 2010, pp. 27-30.
[9] F. Nori, L. Jamone, G. Metta, and G. Sandini, "Accurate control of a human-like tendon-driven neck," in Proceedings of the IEEE International Conference on Humanoid Robots, Pittsburgh, USA, Nov. 2007, pp. 371-378.
[10] M. Fumagalli, L. Jamone, G. Metta, L. Natale, F. Nori, A. Parmiggiani, M. Randazzo, and G. Sandini, "A force sensor for the control of a human-like tendon driven neck," in Proceedings of the IEEE International Conference on Humanoid Robots, Paris, France, Dec. 2009, pp. 478-485.
[11] D. M. Brouwer, J. Bennik, J. Leideman, et al., "Mechatronic design of a fast and long range 4 degrees of freedom humanoid neck," in Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, May 2009, pp. 574-579.
[12] K. Kaneko, F. Kanehiro, M. Morisawa, K. Miura, S. Nakaoka, and S. Kajita, "Cybernetic human HRP-4C," in Proceedings of the IEEE International Conference on Humanoid Robots, Paris, France, Dec. 2009, pp. 7-14.
[13] S. Kajita, T. Nakano, M. Goto, Y. Matsusaka, S. Nakaoka, and K. Yokoi, "VocaWatcher: natural singing motion generator for a humanoid robot," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, USA, Sept. 2011, pp. 2000-2007.
[14] M. Michalowski, R. Simmons, and H. Kozima, "Rhythmic attention in child-robot dance play," in Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, Sept.-Oct. 2009, pp. 816-821.
[15] E. Avrunin, J. Hart, A. Douglas, and B. Scassellati, "Effects related to synchrony and repertoire in perceptions of robot dance," in Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, Mar. 2011, pp. 93-100.
[16] X. Yin and X. Zhu, "Hand posture recognition in gesture-based human-robot interaction," in Proceedings of the 1st IEEE Conference on Industrial Electronics and Applications, Singapore, May 2006, pp. 1-6.
[17] E. Ohn-Bar and M. M. Trivedi, "Hand gesture recognition in real-time for automotive interfaces: a multimodal vision-based approach and evaluations," IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 6, Dec. 2014, pp. 2368-2377.
[18] 黃仕翰 (advised by Prof. Wen-june Wang), Gesture and Voice Remote Control of Home Appliances, Master's thesis, Department of Electrical Engineering, National Central University, June 2011.
[19] H. S. Yeo, B. G. Lee, and H. Lim, "Hand tracking and gesture recognition system for human-computer interaction using low-cost hardware," Multimedia Tools and Applications, vol. 74, no. 8, Apr. 2015, pp. 2687-2715.
[20] Robotis homepage, http://www.robotis.com/xe/, accessed Sept. 2014.
[21] Arduino Mega, http://arduino.cc/en/Main/arduinoBoardMega, accessed June 2015.
[22] WS2811 datasheet, http://www.adafruit.com/datasheets/WS2811.pdf, accessed June 2015.
[23] MPR121 datasheet, https://www.sparkfun.com/datasheets/Components/MPR121.pdf, accessed June 2015.
[24] Tower Pro MG90S servo datasheet, http://www.electronicoscaldas.com/datasheet/MG90S_Tower-Pro.pdf
[25] LMC-SSC2A16-01 datasheet, http://www.100y.com.tw/pdf_file/SDEC_LMC-SSC2A16DLYY-E01.pdf, accessed June 2015.
[26] D. Catuhe, 寫給專業開發者用的Windows Kinect SDK技術手冊 (Traditional Chinese edition of Programming with the Kinect for Windows Software Development Kit), DrMaster Press (博碩文化), 2013.
[27] 王森, KINECT體感程式設計入門 (Introduction to Kinect Motion-Sensing Programming), GOTOP Information Inc. (碁峯資訊), 2012.
[28] J. A. M. Basilio, G. A. Torres, G. S. Pérez, L. K. T. Medina, and H. M. P. Meana, "Explicit image detection using YCbCr space color model as skin detection," in Proceedings of the 2011 American Conference on Applied Mathematics and the 5th WSEAS International Conference on Computer Engineering and Applications, Jan. 2011, pp. 123-128.
[29] OpenCV official tutorial, http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#result
[30] P. Viola and M. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, 2004, pp. 137-154.
[31] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, Aug. 1997, pp. 119-139.
[32] Kinect Studio, http://msdn.microsoft.com/en-us/library/hh855389.aspx, accessed July 2014.
[33] S. Suzuki and K. Abe, "Topological structural analysis of digitized binary images by border following," Computer Vision, Graphics, and Image Processing, vol. 30, no. 1, Apr. 1985, pp. 32-46.
[34] C. Manresa, J. Varona, R. Mas, and F. J. Perales, "Hand tracking and gesture recognition for human computer interaction," Electronic Letters on Computer Vision and Image Analysis, vol. 5, no. 3, 2005, pp. 96-104.
Advisor: Wen-june Wang (王文俊)    Date approved: 2015-08-19
