Master's Thesis 109323087 — Detailed Record




Name: Yuan-Heng Huang (黃元亨)    Department: Department of Mechanical Engineering
Thesis Title: A Computer Vision and LiDAR Based Sensor Fusion System for Robot Localization
(混合視覺與光達感測的感知融合機器人定位系統)
Related Theses
★ A Study on Microwave Chemical-Enhanced Polishing of Silicon Carbide Surfaces ★ Asset Administration Shell for Vertical System Integration in Smart Manufacturing
★ Anomaly Detection of Training Data for Cyber-Physical Systems in Smart Manufacturing ★ Estimating CNC Machining Time with Deep Learning and the Internet of Things
★ A Distributed Mechanical Structure Optimization System Combining Genetic Algorithms and Neural Networks ★ Developing Smart Products and Their Systems with Data Distribution Service
★ An Intelligent Human-Robot Collaboration System for Precision Product Assembly ★ Improvement and Application of the YOLOv7 Model for Small-Object Detection
★ A Divide-and-Conquer Approach to Tool Life Prediction Models ★ Design and Application of an Automated Workstation Scheduling System
★ A Blockchain-Based Decentralized Manufacturing Execution System ★ A Hybrid Ant Colony Algorithm for Project Scheduling
  1. The author has agreed to make this electronic thesis available immediately.
  2. The released full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) This thesis examines the previous robot localization approach of pure LiDAR with Adaptive Monte Carlo Localization (AMCL) and proposes an innovative robot localization system based on the sensor fusion of machine vision and LiDAR. The proposed system decomposes the localization workflow of an autonomous mobile robot (AMR) into three parts: global localization, pose tracking, and the kidnapping problem. After analyzing the strengths and weaknesses of vision and LiDAR in each part, the appropriate sensor is chosen for each, improving the overall reliability and performance of the system. First, a spatial coordinate system built from a pair of two-dimensional tags (AprilTag) serves as the reference for AMR global localization, overcoming the feature scarcity of LiDAR signals and their inability to identify the AMR's position quickly and directly, and thereby completing the initial pose estimation. Next, starting from the precise initial position computed from the tags, the high accuracy and real-time nature of LiDAR signals are exploited for pose tracking, effectively maintaining accurate AMR localization. Finally, during pose tracking, the dispersion of the AMCL particles is continuously monitored to verify the accuracy of the estimated pose; when the total particle variance exceeds a given threshold, the system triggers the global localization function to re-localize, solving the kidnapping problem.
The experimental results demonstrate three contributions: (1) compared with a pure-LiDAR localization system, the proposed system completes AMR localization with fewer iterations and less computation, and effectively and automatically overcomes the difficulty pure-LiDAR systems have in environments with highly repetitive geometric features; (2) the proposed two-tag global localization algorithm effectively eliminates the coordinate transformation error that arises in single-tag localization; (3) combining the tag-based automatic waypoint generation method with the above localization system simplifies and automates the setup workflow of the AMR system, making it more practical.
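The tag-based global localization described above reduces to composing coordinate transforms: given a tag's known pose in the map frame and its detected pose in the robot frame, the robot's map pose follows. Below is a minimal 2D (SE(2)) sketch with hypothetical helper names and poses as (x, y, θ) tuples; it illustrates the core relation only, not the thesis's actual implementation:

```python
import math

def compose(a, b):
    """SE(2) composition a∘b: express pose b, given in frame a, in a's parent frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of an SE(2) pose, so that compose(p, invert(p)) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def robot_pose_in_map(tag_in_map, tag_in_robot):
    """Global robot pose from one detected tag:
    T_map_robot = T_map_tag ∘ inv(T_robot_tag)."""
    return compose(tag_in_map, invert(tag_in_robot))

# A tag 2 m ahead on the map's x-axis, seen 1 m straight ahead of the robot,
# places the robot at x = 1 m with zero heading.
print(robot_pose_in_map((2.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

The thesis's two-tag method constrains this transform with a second tag to cancel the single-tag transformation error; the single-tag form above shows only the underlying frame composition.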
Abstract (English) This thesis studies the previous robot localization approach that relies solely on LiDAR with Adaptive Monte Carlo Localization (AMCL) and proposes an innovative robot localization system based on the sensor fusion of computer vision (CV) and LiDAR. The localization process of the autonomous mobile robot (AMR) is divided into three parts: global localization, pose tracking, and the kidnapping problem. After analyzing the advantages and disadvantages of CV and LiDAR in each part, the appropriate sensor is selected for each to ensure the overall reliability and performance of the system. First, a spatial coordinate system constructed from two-dimensional tags (AprilTag) serves as the reference for global localization. This approach resolves both the feature scarcity of LiDAR signals and their inability to identify the AMR's position quickly, so the initial pose can be determined. Second, starting from that initial pose, the AMR's position is maintained by LiDAR, exploiting its accuracy and real-time data acquisition. Finally, during pose tracking, the dispersion of the AMCL particles, which indicates the certainty of the localization, is continuously monitored. Once the particle variance exceeds a given threshold, the re-localization function that solves the kidnapping problem is triggered.
The experimental results demonstrate three contributions. (1) Compared with a pure-LiDAR localization system, the proposed system completes AMR localization with fewer iterations and less computation, and effectively and automatically handles environments with highly repetitive geometric features. (2) The coordinate transformation error of the single-tag global localization method is effectively eliminated by the proposed two-tag global localization method. (3) Combining the tag-based automatic waypoint generation method with the above localization system simplifies and automates the AMR system's setup workflow, making the system more practical.
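The kidnapping check described above, monitoring the dispersion of the AMCL particle cloud against a threshold, can be sketched as follows. Function names and the threshold value are illustrative assumptions; the thesis does not publish its implementation here:

```python
import statistics

def total_position_variance(particles):
    """Sum of the x- and y-variance of a particle set [(x, y, theta), ...]."""
    xs = [p[0] for p in particles]
    ys = [p[1] for p in particles]
    return statistics.pvariance(xs) + statistics.pvariance(ys)

def needs_relocalization(particles, threshold=0.5):
    """True when the cloud is too dispersed, i.e. the AMCL pose estimate is
    unreliable and tag-based global localization should be triggered again."""
    return total_position_variance(particles) > threshold

# A tight cluster keeps tracking; a scattered cloud triggers re-localization.
tight = [(1.00, 1.00, 0.0), (1.01, 0.99, 0.1), (0.99, 1.02, -0.1)]
scattered = [(0.0, 0.0, 0.0), (5.0, 5.0, 1.0), (-5.0, 3.0, 2.0)]
print(needs_relocalization(tight), needs_relocalization(scattered))  # prints: False True
```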
Keywords ★ Adaptive Monte Carlo Localization (AMCL)
★ Autonomous Mobile Robot (AMR)
★ Two-Dimensional Tag (AprilTag)
★ LiDAR
★ Global Localization
★ Sensor Fusion
★ Coordinate Transformation
Table of Contents Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Research Background
1-1-1 Research Motivation
1-1-2 Challenges Facing Industrial AMRs
1-2 Literature Review
1-2-1 Overview of Robot Localization Problems and Techniques
1-2-2 Application Methods of Visual Tags
1-2-3 Current Research on LiDAR-Based AMR Global Localization
1-2-4 Fusion of Monocular Vision and 2D LiDAR
1-3 Research Objectives
1-4 Thesis Structure
Chapter 2: Analysis of Key Technologies
2-1 Robot Operating System (ROS)
2-1-1 ROS Middleware Architecture
2-2 AMR Technologies and Current Factory Applications
2-2-1 What Is an AMR?
2-2-2 System Principles of AMRs
2-2-3 AMR Applications in Factories
2-2-4 AMR Mapping, Localization, and Navigation Technologies
2-3 Adaptive Monte Carlo Localization
2-3-1 Particle Filter
2-3-2 Monte Carlo Localization
2-3-3 The Adaptivity of AMCL
2-4 AprilTag Spatial Localization Tags
2-4-1 AprilTag Algorithm Flow
2-4-2 Observed Performance and Characteristics of AprilTag
Chapter 3: Research Methods and Procedures
3-1 Robot Localization System Architecture
3-2 Research Workflow and Algorithms
3-2-1 AprilTag Position Data Collection and Waypoint Generation Algorithm
3-2-2 Theory of the AMR Global Localization Algorithm
3-3 AprilTag Placement
Chapter 4: Experimental Design and Data Evaluation
4-1 Experimental Equipment and Environment
4-1-1 AMR - FESTO Robotino
4-1-2 Experimental Environment
4-1-3 Experimental System Architecture
4-2 Proof of Concept of the Proposed Theory
4-3 Performance Evaluation of Global Localization Methods
4-3-1 Dynamic Global Localization in the CP Factory
4-3-2 Dynamic Global Localization in a Rectangular Field
4-3-3 Static Global Localization in the CP Factory
4-4 Accuracy Evaluation of Single-Tag and Two-Tag Algorithms
Chapter 5: Contributions and Future Work
5-1 Contributions
5-2 Future Work
References
Appendix A: AprilTag Accuracy Experiment Data
Appendix B: Simultaneous Localization and Mapping (SLAM)
Types of SLAM
Mathematical Representation of SLAM
Map Types Built by SLAM
Advisor: Chin-Te Lin (林錦德)    Date of Approval: 2021-7-13
