Thesis Record 104521101: Detailed Information




Author 林宜臻 (Janice Lin)   Department Electrical Engineering
Thesis Title Deep-Learning-Based Outdoor Navigation Robot
Related Theses
★ Control of a direct methanol fuel cell hybrid power supply system
★ Water quality monitoring for hydroponic plants using refractive-index measurement
★ DSP-based automatic guidance and control system for a model car
★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for highway on-ramp and off-ramp signals
★ On the fuzziness of fuzzy sets
★ Further improvement of the motion-control performance of a dual-mass spring-coupled system
★ Machine vision system for air hockey
★ Robotic offense and defense control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ Real-time recognition system for access-control monitoring
★ Air hockey: human versus robotic arm
★ Mahjong tile recognition system
★ Correlation-error neural networks applied to radiometric measurement of vegetation and soil moisture
★ Standing control of a three-link robot
  1. The author has agreed to make the electronic full text openly available immediately.
  2. The released electronic full text is licensed only for personal, non-commercial academic research: searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) This thesis implements a robot system that drives and navigates autonomously outdoors. The overall architecture uses the embedded development board Jetson TX1 as the main control core, together with a camera and a smartphone as control inputs, and combines deep learning, image processing, and motor control to realize a fuzzy-control-based robot system.
In the control flow, deep learning is first applied to recognize the camera images, after which the robot can distinguish the road from obstacles. Road recognition is robust to different lighting conditions and varying road colors. Obstacles can be objects that commonly appear on roads, such as people and cars, and no specific handcrafted features are required. In addition, a self-developed smartphone app navigates the robot: the phone's GPS and electronic compass sensors provide the robot's latitude, longitude, and heading as control inputs, and the Google Maps API performs global route planning so that the robot knows the direction and route to follow. Finally, this information is used to compute a guidance trajectory, allowing the robot to respond to real-time road conditions while following the planned route. Based on the guidance trajectory, fuzzy controllers for going straight and turning, together with a left/right rotation mechanism, drive the motors and complete the overall robot control system.
Users choose a destination in the smartphone app, and the robot follows the route planned by the app, together with the guidance-trajectory information, to travel on the road automatically and reach the specified location.
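The navigation step described in the abstract, obtaining a bearing from the robot's GPS position to the next waypoint and comparing it with the compass heading, can be sketched as follows. This is an illustrative reconstruction under standard great-circle formulas, not code from the thesis; the function names and sign conventions are assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2),
    in degrees clockwise from true north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def heading_error(bearing, compass_heading):
    """Signed heading error in [-180, 180): positive means the
    waypoint lies to the robot's right."""
    return (bearing - compass_heading + 180.0) % 360.0 - 180.0
```

A point due east yields a bearing of 90 degrees, so a robot already facing east has zero heading error; the controller only needs to act on the signed error.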
Abstract (English) An outdoor autonomous driving and navigation robot system is presented in this thesis. The control system is implemented on an embedded development board, the Jetson TX1, together with a camera and a smartphone. Technologies such as deep learning, image processing, and motor control are combined to realize a fuzzy-control-based robot system.
At the beginning of the control flow, deep learning is used to analyze the images captured by the camera, so that the robot can find road regions and recognize common objects such as people and cars; no particular handcrafted features are required. Furthermore, a custom smartphone application uses the phone's GPS and electronic compass sensors to obtain the robot's position and heading as navigation inputs. Combined with the Google Maps API, the application then provides global route planning to the robot. Finally, a guidance trajectory is computed from the deep-learning recognition results and the navigation information from the smartphone application, so the robot can react to road conditions in real time while following the planned path. Fuzzy controllers for going straight and turning, based on the guidance trajectory, are designed to complete the overall robot control system.
Users select a destination in the smartphone application, and the robot automatically drives to the requested place according to the route planned by the application and the processed deep-learning results.
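The fuzzy-controller idea mentioned above can be illustrated with a minimal sketch: a single-input controller that maps heading error to a wheel-speed difference using triangular membership functions and weighted-average defuzzification. The rule base, membership ranges, and names here are illustrative assumptions, not the controllers actually designed in the thesis.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(error_deg):
    """Map a heading error (degrees, positive = target to the right) to a
    normalized wheel-speed difference in [-1, 1] (positive = turn right)."""
    e = max(-89.0, min(89.0, error_deg))  # clamp into the membership support
    # Membership degrees for Negative / Zero / Positive heading error.
    mu = {
        "N": tri(e, -90.0, -45.0, 0.0),
        "Z": tri(e, -45.0, 0.0, 45.0),
        "P": tri(e, 0.0, 45.0, 90.0),
    }
    # Singleton rule outputs: steer left, go straight, steer right.
    out = {"N": -1.0, "Z": 0.0, "P": 1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0.0 else 0.0
```

Weighted-average (centroid-of-singletons) defuzzification makes the output vary smoothly between rules, e.g. an error of 22.5 degrees fires the "zero" and "positive" rules equally and yields a half-strength right turn.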
Keywords (Chinese) ★ deep learning
★ Google Maps API
★ robot guidance
★ fuzzy control
Keywords (English) ★ deep learning
★ Google Maps API
★ robot navigation
★ fuzzy control
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Background and Motivation
1.2 Literature Review
1.3 Objectives
1.4 Thesis Organization
Chapter 2 System Architecture and Hardware
2.1 System Architecture
2.2 Robot Hardware Architecture
Chapter 3 Deep-Learning Recognition and Smartphone Navigation System
3.1 Deep-Learning Recognition
3.1.1 Training Data
3.1.2 Network Architecture
3.2 Smartphone Navigation
Chapter 4 Robot Guidance and Control
4.1 Image Preprocessing
4.2 Guidance Trajectory
4.2.1 Going Straight
4.2.2 Turning
4.2.3 Obstacle Avoidance
4.2.4 Reaching the Destination
4.3 Motor Control
4.3.1 Fuzzy Controller for Going Straight and Turning
4.3.2 Left/Right Rotation Control
4.4 System Control Flow
Chapter 5 Experimental Results
5.1 Deep-Learning Recognition
5.2 Smartphone Navigation System Usage
5.3 Robot Control Field Tests
5.3.1 Right-Turn Test
5.3.2 Left-Turn Test
5.3.3 Straight-Driving Test
5.3.4 Obstacle-Avoidance Test
Chapter 6 Conclusions and Future Work
6.1 Conclusions
6.2 Future Work
Chapter 7 References
References [1] C. H. Chao, B. Y. Hsueh, M. Y. Hsiao, S. H. Tsai, and T. H. S. Li, "Fuzzy target tracking and obstacle avoidance of mobile robots with a stereo vision system," International Journal of Fuzzy Systems, vol. 11, no. 3, 2009, pp. 183-191.
[2] K. Watanabe, T. Kato, and S. Maeyama, "Obstacle avoidance for mobile robots using an image-based fuzzy controller," IECON 2013, 39th Annual Conference of the IEEE Industrial Electronics Society, Vienna, 2013, pp. 6392-6397.
[3] C. K. Chang, C. Siagian, and L. Itti, "Mobile robot monocular vision navigation based on road region and boundary estimation," 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, 2012, pp. 1043-1050.
[4] D. C. Hernandez, V. D. Hoang, A. Filonenko, and K. H. Jo, "Vision-based heading angle estimation for an autonomous mobile robots navigation," 2014 IEEE International Symposium on Industrial Electronics (ISIE), Istanbul, 2014, pp. 1967-1972.
[5] A. Chand, "Navigation strategy and path planning for autonomous road crossing by outdoor mobile robots," 2011 IEEE International Conference on Advanced Robotics (ICAR), Tallinn, 2011, pp. 161-167.
[6] M. Y. Ju and J. R. Lee, "Vision-based mobile robot navigation using active learning concept," 2013 IEEE International Conference on Advanced Robotics and Intelligent Systems, Tainan, 2013, pp. 122-129.
[7] Y. Nie, Q. Chen, T. Chen, Z. Sun, and B. Dai, "Camera and lidar fusion for road intersection detection," 2012 IEEE Symposium on Electrical & Electronics Engineering (EEESYM), Kuala Lumpur, 2012, pp. 273-276.
[8] D. Fernandez and A. Price, "Visual detection and tracking of poorly structured dirt roads," 2005 IEEE International Conference on Advanced Robotics, Seattle, WA, 2005, pp. 553-560.
[9] T. Kinattukara and B. Verma, "Wavelet based fuzzy clustering technique for the extraction of road objects," 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Istanbul, 2015, pp. 1-7.
[10] I. K. Somawirata and F. Utaminingrum, "Road detection based on the color space and cluster connecting," 2016 IEEE International Conference on Signal and Image Processing (ICSIP), Beijing, 2016, pp. 118-122.
[11] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, 2017, pp. 640-651.
[12] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[13] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," arXiv preprint arXiv:1606.00915, 2016.
[14] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for scene segmentation," arXiv preprint arXiv:1511.00561, 2015.
[15] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," arXiv preprint arXiv:1511.07122, 2015.
[16] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, "ENet: A deep neural network architecture for real-time semantic segmentation," arXiv preprint arXiv:1606.02147, 2016.
[17] SmartMotor servo motor website, https://www.animatics.com/, accessed June 2017.
[18] G. J. Brostow, J. Fauqueur, and R. Cipolla, "Semantic object classes in video: A high-definition ground truth database," Pattern Recognition Letters, vol. 30, no. 2, 2009, pp. 88-97.
[19] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2818-2826.
[20] C. Szegedy et al., "Going deeper with convolutions," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1-9.
[21] Google Maps API website, https://code.google.com/apis/maps/, accessed June 2017.
[22] Android Studio website, https://developer.android.com/studio/, accessed June 2017.
[23] R. Laganiere, "Compositing a bird's eye view mosaic," 2000 Conference on Vision Interface, Montreal, Canada, 2000, pp. 382-387.
[24] D. York, "Least-squares fitting of a straight line," Canadian Journal of Physics, vol. 44, no. 5, 1966, pp. 1079-1086.
[25] 王文俊 (W.-J. Wang), 認識Fuzzy (Understanding Fuzzy), 3rd ed., Chuan Hwa Book Co., June 2008.
Advisor 王文俊 (Wen-June Wang)   Approval Date 2018-01-22
