Master's/Doctoral Thesis 955201074: Detailed Record




Name: Chien-Liang Yu (余建良)    Department: Department of Electrical Engineering
Thesis Title: 互動雙足式機器人之設計與實現(III)互動演算法執行
(The design and realization of interactive biped robots (III): execution of interactive algorithm for two robots)
Related Theses
★ A Study on the Control of a Direct Methanol Fuel Cell Hybrid Power Supply System
★ Water Quality Inspection for Hydroponic Plants Using a Refractive-Index Detection Method
★ A DSP-Based Automatic Guidance and Control System for a Model Car
★ Redesign of the Motion Control of a Rotary Inverted Pendulum
★ Fuzzy Control Decisions for Freeway On-Ramp and Off-Ramp Signals
★ An Investigation of the Fuzziness of Fuzzy Sets
★ Further Improvement of the Motion-Control Performance of a Dual-Mass Spring-Coupled System
★ A Machine Vision System for Table Hockey
★ Robotic Offense and Defense Control for Table Hockey
★ Attitude Control of a Model Helicopter
★ Stability Analysis and Design of Fuzzy Control Systems
★ A Real-Time Recognition System for Access-Control Monitoring
★ Table Hockey: A Human Playing Against a Robotic Arm
★ A Mahjong Tile Recognition System
★ Applying Correlated-Error Neural Networks to Radiometric Measurement of Vegetation and Soil Water Content
★ Standing Control of a Three-Link Robot
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is authorized for academic research only: personal, non-profit searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) This study, "The design and realization of interactive biped robots," was completed jointly by three students and is divided into three parts: (A) gesture recognition, (B) biped robot control, and (C) execution of the interactive algorithm. This thesis addresses part (C).
To design and control a pair of biped robots that perform cooperative, interactive routines, with hand-gesture recognition used to tell the robots which task to execute, this study develops several algorithms that enable the robots to interact. The two robots are named the Master robot and the Slave robot. The Master robot carries a wireless camera on its head and therefore has vision; the Slave robot has no camera but is equipped with an infrared sensor that measures the distance between itself and an object, and the two robots communicate over a wireless link.
In this thesis, the Master robot uses its wireless camera to capture images of the object to be carried, applies the proposed independent-transport algorithm to identify the target position, and carries the object to the designated destination on its own. Color marks pasted on the Slave robot's body let the Master robot locate it and, through different algorithms, direct it so that the two robots jointly perform the following interactive motions: 1. the two robots walk toward each other and shake hands; 2. the two robots relay the object to the destination and set it down; 3. the two robots carry the object to the destination together and set it down. Determining the relative positions of the robots and the object relies not only on the Master robot's machine vision but also on the Slave robot's infrared sensor, which makes the positioning of the two robots more accurate. The main achievement of this thesis is that, by combining machine vision, the infrared sensor, and wireless communication, the robots are given the ability to obey commands, correct themselves, and carry objects cooperatively.
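The color-mark recognition mentioned above is treated in the thesis as color-block information extraction followed by a connected-component method (Sections 3.2.2 and 3.2.3 of the table of contents). Below is a minimal sketch of that general technique, assuming a pre-thresholded binary mask, 4-connectivity, and a hypothetical min_area noise filter; it illustrates the idea, not the thesis's actual implementation.

# Minimal sketch of connected-component labeling for color-block
# extraction (cf. Sections 3.2.2-3.2.3). The 4-connectivity and the
# min_area threshold are assumptions for illustration only.
from collections import deque

def find_color_blocks(mask, min_area=30):
    """Label 4-connected regions in a binary mask (list of 0/1 rows)
    and return the centroid and area of each sufficiently large one."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blocks = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                seen[y][x] = True
                queue, pixels = deque([(y, x)]), []
                while queue:  # BFS flood fill over one component
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:  # discard small noise blobs
                    blocks.append({
                        "centroid": (sum(p[1] for p in pixels) / len(pixels),
                                     sum(p[0] for p in pixels) / len(pixels)),
                        "area": len(pixels),
                    })
    return blocks

The centroid of each surviving block would then serve as the pixel position of a color mark, and hence of the Slave robot or the object, in the Master robot's view.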
Abstract (English) The study "The design and realization of interactive biped robots" was completed by three members, who accomplished the following three tasks, respectively: (A) gesture recognition, (B) control of the basic motions of biped robots, and (C) execution of the interactive algorithm for two robots. This thesis focuses on part (C), the execution of the interactive algorithm for two robots.
The goal of this research is to design and control a pair of biped robots such that the two robots can cooperate with each other. Humans can command the robots to perform motions through certain hand gestures. Of the two robots, one is called the "Master" and the other the "Slave." A wireless camera is installed on top of the Master robot as its eye. The Slave robot, on the other hand, has no camera and therefore no vision capability, but it can measure the distance between itself and the object to be carried using its infrared sensor. Furthermore, the two robots communicate with each other over a wireless link.
In this thesis, the Master robot uses its camera to recognize the positions of the object to be carried and the desired destination, and then transports the object from its initial position to the destination. Based on the color marks pasted on the Slave robot, the Master robot can direct the Slave robot to accomplish the following interactive motions: 1. the two robots walk toward each other and shake hands; 2. the Slave robot passes the object to the Master robot, which then moves it to the destination; and 3. the two robots carry the object together and transport it to the destination. During these interactive motions, the relative positions between the two robots, or between a robot and the object, are determined not only by the Master robot's camera but also by the Slave robot's infrared sensor, so the interactive motions become much more accurate. The major achievement of this thesis is the integration of robot vision, an infrared sensor, and wireless communication to make the robots obey instructions and work together.
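Because the cooperation described above hinges on a wireless command/feedback exchange (the control, motion-mode, and feedback packets of Section 3.1) and on refining the camera's position estimate with the Slave's infrared reading, a hedged sketch of such an exchange follows. The byte layout, mode codes, checksum, and fixed fusion weight are all illustrative assumptions; the thesis defines its own packet formats, which are not reproduced here.

# Hypothetical command/feedback packets and distance fusion; every
# field and constant here is an assumption, not the thesis's protocol.
import struct

HEADER = 0xAA  # assumed start-of-packet marker

def make_control_packet(robot_id, mode, p1=0, p2=0):
    """Pack a command: header, robot id, motion-mode code, and two
    signed 16-bit parameters (e.g. step count, turn angle)."""
    body = struct.pack(">BBBhh", HEADER, robot_id, mode, p1, p2)
    return body + bytes([sum(body) & 0xFF])  # 1-byte checksum

def parse_feedback_packet(pkt):
    """Unpack a feedback packet carrying the Slave's IR distance (mm)."""
    header, robot_id, distance_mm, checksum = struct.unpack(">BBhB", pkt)
    if checksum != sum(pkt[:-1]) & 0xFF:
        raise ValueError("corrupted packet")
    return robot_id, distance_mm

def fuse_distance(vision_mm, infrared_mm, ir_weight=0.6):
    """Blend the camera's estimate with the IR reading; the fixed
    weight is a stand-in for the thesis's refinement of vision-based
    positioning with the infrared sensor."""
    return ir_weight * infrared_mm + (1.0 - ir_weight) * vision_mm

In a loop, the Master would send a motion command, read back the Slave's infrared distance, fuse it with its own camera estimate, and issue the next correction; this is the self-correcting behavior the abstract describes.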
Keywords (Chinese) ★ robot vision
★ interaction
★ biped robot
Keywords (English) ★ robot vision
★ interactive
★ biped robot
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Background and Motivation
1.2 Literature Review
1.3 Thesis Objectives
1.4 Thesis Organization
Chapter 2  System Architecture and Experimental Environment
2.1 Robot Mechanism Design and Hardware Configuration
2.2 System Architecture
2.2.1 Robot-Side System Architecture
2.2.2 PC-Side System Architecture
2.3 Overview of the Experimental Environment
2.4 Development Environment and Human-Machine Interface
2.5 System Flow
Chapter 3  Interactive Algorithms for the Robots
3.1 Packet Format
3.1.1 Control Packets
3.1.2 Motion Modes
3.1.3 Feedback Packets
3.2 Image Recognition
3.2.1 Noise Removal
3.2.2 Color-Block Information Extraction
3.2.3 Connected-Component Method
3.3 Independent-Transport Algorithm
3.3.1 Target Search
3.3.2 Transport-Target Positioning
3.4 Commanded-Handshake Algorithm
3.5 Relay-Transport Algorithm
3.6 Cooperative-Transport Algorithm
3.6.1 Direction Correction
3.6.2 Object-Distance Detection
3.6.3 Cooperative-Transport Algorithm
Chapter 4  Experimental Results
4.1 Results of Independent Transport
4.2 Results of Commanded Handshake
4.3 Results of Relay Transport
4.4 Results of Cooperative Transport
4.5 Analysis and Discussion
4.6 Snapshots of Each Algorithm in Action
Chapter 5  Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Advisor: Wen-June Wang (王文俊)    Date of Approval: 2008-6-25
