Thesis 106521082 — Detailed Record




Author: 邱文欣 (Wen-Hsin Chiu)    Department: Electrical Engineering
Thesis Title: Deep-Learning-Based Monocular Distance Estimation and Outdoor Walking Control of a Robot
Related Theses
★ Control Study of a Direct Methanol Fuel Cell Hybrid Power Supply System
★ Water Quality Inspection for Hydroponic Plants Using Refractive Index Measurement
★ DSP-Based Automatic Guidance and Control System for a Model Car
★ Redesign of Motion Control for a Rotary Inverted Pendulum
★ Fuzzy Control Decisions for Freeway On/Off-Ramp Signals
★ On the Fuzziness of Fuzzy Sets
★ Further Improvement of Motion Control Performance for a Dual-Mass Spring-Coupled System
★ Machine Vision System for Air Hockey
★ Robotic Offense and Defense Control for Air Hockey
★ Attitude Control of a Model Helicopter
★ Stability Analysis and Design of Fuzzy Control Systems
★ Real-Time Recognition System for Access Control and Surveillance
★ Air Hockey: Human Versus Robotic Arm
★ Mahjong Tile Recognition System
★ Correlation-Error Neural Networks Applied to Radiometric Measurement of Vegetation and Soil Water Content
★ Standing Control of a Three-Link Robot
  1. The author has agreed to make this electronic thesis open access immediately.
  2. For theses that have reached their open-access date, the full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese) This thesis designs and improves the walking guidance and obstacle avoidance functions of the outdoor guide robot in [1], to make outdoor walking more reliable for the visually impaired. First, the user selects a destination on a cell phone. The phone plans the route through the Google Maps API and, from the robot's current distance to the destination and its yaw angle, determines and sends a go-straight, turn, or stop navigation command to the main control computer. The main control computer captures images from a webcam, identifies the walkable road area by semantic segmentation, and applies a deep-learning network to estimate the disparity of obstacles; the estimated disparity is converted to depth through a reciprocal equation, and the obstacle distance is then determined from a depth histogram. The distance estimation in this thesis achieves about 80% accuracy between 0.8 m and 4 m. Once the walkable road area is identified, the Hough line method draws the right-hand boundary of the walkable road, the road area is divided into several blocks, each representing one path segment, and suitable trajectory points are found through experiments. Fuzzy control then computes the angular velocities of the robot's left and right wheels so that the robot moves along the trajectory points. Since obstacle avoidance is required while moving, the deep-learning and disparity-conversion method above infers obstacle distances between 0.8 m and 4 m from the images of the single webcam. If an obstacle lies within the central region of the image and is less than 3.5 m from the robot, the robot performs an avoidance maneuver; if an obstacle suddenly appears within 1 m ahead, the robot stops and resumes walking only when no obstacle remains within 1 m in front. Experiments on outdoor roads verify that the obstacle distance estimation is more accurate than the result of [1] and the motion control is more stable than the method of [1], so the guide robot reaches its destination more accurately and safely.
Abstract (English) This thesis designs and improves the moving-guidance and obstacle-avoidance functions of the guide robot from reference [1] so that the robot can better assist the blind in daily life. First, the user selects the destination on a cell phone; the phone plans the robot's path using the Google Maps API and, according to the robot's current position and attitude and the destination position, sends navigation commands to the computer on the robot. The robot uses a single webcam to capture the image ahead: a semantic segmentation method finds the accessible road area, and a deep-learning network predicts the disparity of obstacles ahead of the robot. From the disparity and an inverse function, a depth map of the obstacles is obtained, and the robot-to-obstacle distance is estimated by analyzing the depth histogram. In this study, distances from 0.8 m to 4 m are estimated with about 80% accuracy. Once the accessible area is obtained, a Hough line is fitted to represent the road border on the robot's right side, and the accessible road area ahead is divided into several rectangular blocks. Since the robot is constrained to move along the right side of the road, a trajectory point is found in each block, and a fuzzy control technique adjusts the speeds of both wheels so that the robot follows the trajectory points. Based on the distance estimation above, when an obstacle lies at the center of the image and its estimated distance falls below 3.5 m, the robot starts to avoid it; if an obstacle suddenly appears within 1 m ahead, the robot stops and moves again only after the obstacle clears. In outdoor experiments on the NCU campus, the obstacle-distance estimation is more accurate and the robot motion control is markedly more stable than in [1], so the robot can guide the blind to the destination safely and accurately.
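To make the estimation pipeline concrete, the sketch below shows the disparity-to-depth inversion and the histogram rule described in both abstracts. It is a minimal Python/NumPy illustration, assuming the standard stereo relation depth = focal length × baseline / disparity and a "center of the most populated bin" rule; the function names, bin count, and exact histogram statistic are assumptions for illustration, not the thesis code.

```python
import numpy as np

def disparity_to_depth(disparity, baseline_m, focal_px):
    """Convert a predicted disparity map to metric depth via depth = f * B / d."""
    d = np.clip(disparity, 1e-6, None)   # guard against division by zero
    return focal_px * baseline_m / d

def obstacle_distance(depth_patch, d_min=0.8, d_max=4.0, n_bins=32):
    """Pick one distance for an obstacle region from its depth histogram.

    Only depths inside the 0.8-4 m band the thesis reports as reliable
    are kept; the center of the most populated bin is returned.
    """
    valid = depth_patch[(depth_patch >= d_min) & (depth_patch <= d_max)]
    if valid.size == 0:
        return None                       # nothing in the reliable range
    hist, edges = np.histogram(valid, bins=n_bins, range=(d_min, d_max))
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])
```

With, say, a 0.12 m baseline and a 700-pixel focal length (hypothetical camera values), a 30-pixel disparity maps to 700 × 0.12 / 30 = 2.8 m, inside the 0.8-4 m band the thesis reports as roughly 80% accurate.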
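The phone-side navigation relies on the haversine distance [26] and the great-circle bearing [27] between the robot and the destination. Below is a sketch of those two formulas together with a straight/turn/stop decision of the kind the abstracts describe; the stop radius and yaw tolerance are placeholder values, since the record does not state the thesis's thresholds.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points [26]."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, clockwise from north [27]."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def navigation_command(heading_deg, lat, lon, dest_lat, dest_lon,
                       stop_radius_m=2.0, yaw_tol_deg=15.0):
    """Return 'stop', 'straight', 'turn left', or 'turn right' (thresholds assumed)."""
    if haversine_m(lat, lon, dest_lat, dest_lon) < stop_radius_m:
        return "stop"
    err = (initial_bearing_deg(lat, lon, dest_lat, dest_lon)
           - heading_deg + 540.0) % 360.0 - 180.0  # wrap yaw error to [-180, 180)
    if abs(err) <= yaw_tol_deg:
        return "straight"
    return "turn right" if err > 0 else "turn left"
```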
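The two avoidance thresholds are stated explicitly in the abstracts (avoid below 3.5 m when the obstacle is in the image center, hard-stop below 1 m), so the decision layer can be summarized as a three-way rule. The function name, argument names, and returned command strings below are illustrative assumptions.

```python
def avoidance_command(distance_m, obstacle_in_center,
                      avoid_below_m=3.5, stop_below_m=1.0):
    """Three-way decision from the abstract's thresholds.

    `distance_m` is the histogram-based estimate (None when no obstacle
    falls in the reliable 0.8-4 m band).
    """
    if distance_m is not None and distance_m < stop_below_m:
        return "stop"    # wait until the 1 m region ahead is clear again
    if obstacle_in_center and distance_m is not None and distance_m < avoid_below_m:
        return "avoid"   # steer around the obstacle
    return "follow"      # keep tracking the trajectory points
```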
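Finally, the abstracts say fuzzy control maps the tracking error onto the angular velocities of the two wheels. The thesis's actual rule base is not given on this page, so the following is only a generic zero-order Sugeno sketch for a differential-drive robot: three rules (steer left / go straight / steer right) over the yaw error to the next trajectory point. All membership breakpoints, rule consequents, and the sign convention (positive error = target to the right) are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ramp_down(x, a, b):
    """Left-shoulder membership: 1 for x <= a, linear down to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def fuzzy_wheel_speeds(yaw_err_deg, base_rads=2.0, diff_rads=1.0):
    """Map yaw error to (left, right) wheel angular velocities in rad/s."""
    mu_left = ramp_down(yaw_err_deg, -45.0, 0.0)    # target well to the left
    mu_zero = tri(yaw_err_deg, -45.0, 0.0, 45.0)    # roughly on course
    mu_right = ramp_down(-yaw_err_deg, -45.0, 0.0)  # target well to the right
    total = mu_left + mu_zero + mu_right
    # Zero-order Sugeno defuzzification: weighted average of rule consequents.
    delta = (mu_left * -diff_rads + mu_zero * 0.0 + mu_right * diff_rads) / total
    # Turning right = left wheel faster than right wheel, and vice versa.
    return base_rads + delta, base_rads - delta
```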
Keywords (Chinese) ★ Robot control
★ Obstacle avoidance control
★ Deep learning
★ Monocular depth estimation
Keywords (English) ★ Robot control
★ Obstacle avoidance control
★ Deep learning
★ Monocular depth prediction
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Motivation and Background
1.2 Literature Review
1.3 Thesis Objectives
1.4 Thesis Organization
Chapter 2 System Architecture and Hardware
2.1 System Architecture
2.2 Hardware Architecture
2.2.1 Robot Side
2.2.2 Cell Phone Side
2.2.3 Stereo Camera for Collecting Deep-Learning Training Data
Chapter 3 Monocular Distance Estimation with Deep Learning
3.1 Deep-Learning Network Architecture for Single-Image Depth Estimation
3.2 Training Data for the Deep-Learning Network
3.3 Disparity-to-Depth Conversion Equation
3.4 Computation of Obstacle Depth Estimates
Chapter 4 Robot Path Planning and Obstacle Avoidance Control
4.1 Cell Phone Navigation
4.1.1 Distance Between Two Latitude/Longitude Points
4.1.2 Bearing Between Two Latitude/Longitude Points
4.1.3 Navigation Control
4.2 Robot Path Planning
4.2.1 Reference Line for Keeping to the Right
4.2.2 Going Straight
4.2.3 Turning
4.2.4 Obstacle Avoidance
4.3 Fuzzy Control of Robot Motion
Chapter 5 Experimental Results
5.1 Depth Estimation
5.2 Robot Control
5.2.1 Cell Phone Navigation
5.2.2 Obstacle Avoidance
5.2.3 Going Straight
5.2.4 Left Turn
5.2.5 Right Turn
Chapter 6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Work
References
References
[1] 賴怡靜, "Outdoor navigation robot with deep-learning-based distance estimation and automatic obstacle avoidance," M.S. thesis, Department of Electrical Engineering, National Central University, Taoyuan, Taiwan, 2018.
[2] K. Karsch, C. Liu, and S. B. Kang, "Depth transfer: Depth extraction from video using non-parametric sampling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 11, pp. 2144-2158, 2014.
[3] D. Eigen, C. Puhrsch, and R. Fergus, "Depth map prediction from a single image using a multi-scale deep network," in Advances in Neural Information Processing Systems, 2014, pp. 2366-2374.
[4] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab, "Deeper depth prediction with fully convolutional residual networks," in 2016 Fourth International Conference on 3D Vision (3DV), 2016: IEEE, pp. 239-248.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[6] J. Zbontar and Y. LeCun, "Stereo matching by training a convolutional neural network to compare image patches," Journal of Machine Learning Research, vol. 17, pp. 1-32, 2016.
[7] N. Mayer et al., "A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4040-4048.
[8] J. Xie, R. Girshick, and A. Farhadi, "Deep3d: Fully automatic 2d-to-3d video conversion with deep convolutional neural networks," in European Conference on Computer Vision, 2016: Springer, pp. 842-857.
[9] J. Flynn, I. Neulander, J. Philbin, and N. Snavely, "Deepstereo: Learning to predict new views from the world′s imagery," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5515-5524.
[10] R. Garg, V. K. BG, G. Carneiro, and I. Reid, "Unsupervised cnn for single view depth estimation: Geometry to the rescue," in European Conference on Computer Vision, 2016: Springer, pp. 740-756.
[11] C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, "Digging into self-supervised monocular depth estimation," arXiv preprint arXiv:1806.01260, 2018.
[12] V. Casser, S. Pirk, R. Mahjourian, and A. Angelova, "Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos," arXiv preprint arXiv:1811.06152, 2018.
[13] L. Doitsidis, A. Nelson, K. Valavanis, M. Long, and R. Murphy, "Experimental validation of a MATLAB based control architecture for multiple robot outdoor navigation," in Proceedings of the 2005 IEEE International Symposium on Intelligent Control and Mediterranean Conference on Control and Automation, 2005: IEEE, pp. 1499-1505.
[14] L. Doitsidis, K. P. Valavanis, and N. Tsourveloudis, "Fuzzy logic based autonomous skid steering vehicle navigation," in Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), 2002, vol. 2: IEEE, pp. 2171-2177.
[15] G. Oriolo, G. Ulivi, and M. Vendittelli, "Real-time map building and navigation for autonomous robots in unknown environments," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 28, no. 3, pp. 316-333, 1998.
[16] C. Rusu, I. Birou, and E. Szöke, "Fuzzy based obstacle avoidance system for autonomous mobile robot," in 2010 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), 2010, vol. 1: IEEE, pp. 1-6.
[17] J. Levinson et al., "Towards fully autonomous driving: Systems and algorithms," in 2011 IEEE Intelligent Vehicles Symposium (IV), 2011: IEEE, pp. 163-168.
[18] B. Huval et al., "An empirical evaluation of deep learning on highway driving," arXiv preprint arXiv:1504.01716, 2015.
[19] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, "3-D mapping with an RGB-D camera," IEEE Transactions on Robotics, vol. 30, no. 1, pp. 177-187, 2013.
[20] J. Gaspar, N. Winters, and J. Santos-Victor, "Vision-based navigation and environmental representations with an omnidirectional camera," IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 890-898, 2000.
[21] K. I. Khalilullah, S. Ota, T. Yasuda, and M. Jindai, "Development of robot navigation method based on single camera vision using deep learning," in 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), 2017: IEEE, pp. 939-942.
[22] W. Born and C. Lowrance, "Smoother Robot Control from Convolutional Neural Networks Using Fuzzy Logic," in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018: IEEE, pp. 695-700.
[23] (June 2019). ZED [Online]. Available: https://www.stereolabs.com/zed/.
[24] C. Godard, O. Mac Aodha, and G. J. Brostow, "Unsupervised monocular depth estimation with left-right consistency," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270-279.
[25] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, "Enet: A deep neural network architecture for real-time semantic segmentation," arXiv preprint arXiv:1606.02147, 2016.
[26] (June 2019). Haversine formula [Online]. Available: https://en.wikipedia.org/wiki/Haversine_formula.
[27] (June 2019). Spherical trigonometry [Online]. Available: https://en.wikipedia.org/wiki/Spherical_trigonometry.
Advisor: 王文俊 (Wen-June Wang)    Approval Date: 2019-07-25
