Thesis 109522609: Detailed Record




Name: Wen-Tao Lin (林文韬)    Department: Department of Computer Science and Information Engineering
Thesis Title: A ROS2.0-Based AGV Pallet Docking Area Navigation System Design (基於ROS2.0的AGV棧板對接導航系統設計)
Related Theses
★ An Intelligent Controller Development Platform Integrating the GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Dual-Touch Screen Based on a Two-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithms for Solid-State Drives
★ A Human-Machine Interaction System Based on Stereo-Vision Gesture Recognition
★ Robot System-on-Chip Design Integrating Bio-Inspired Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition in Streaming Video: An Embedded System Design
★ Embedded Hardware Design of a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Continuous Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
Files: full text viewable in the repository system after 2028-1-12 (embargoed)
Abstract (Chinese): AGVs play an indispensable role in warehousing systems, but both the mainstream guidance mode, track-based guidance, and the more advanced trackless guidance have notable drawbacks. With the arrival of the Industry 4.0 era, AGV guidance in intelligent warehouses has begun to transition from track-based to trackless guidance. To bridge the two modes and compensate for the weaknesses of both, this study takes the MIAT methodology as its design basis and proposes a ROS2.0-based AGV pallet docking navigation system. The system combines SLAM mapping and localization, deep-learning object recognition, and a 2D-code localization algorithm to obtain the precise position of the pallet and navigate the AGV to the pallet docking position. By combining the complementary strengths of the two guidance modes, the system provides stable and efficient guidance. In experiments it achieved a 94% success rate in simple scenes and 84% in complex scenes, demonstrating that, when working conditions permit, it completes the pallet docking navigation task with strong flexibility and an excellent success rate.
Abstract (English): The AGV plays an important role in intelligent warehousing, but both its mainstream guidance mode, track-based guidance, and the state-of-the-art trackless guidance have many shortcomings. With the advent of the Industry 4.0 era, AGV guidance in intelligent warehousing has begun to transition from track-based to trackless guidance. To bridge the two guidance modes and address the shortcomings of both, this study uses the MIAT methodology as the basis of the design and proposes a ROS2.0-based AGV pallet docking navigation system. The system combines SLAM mapping and localization, deep-learning object recognition, and a 2D-code positioning algorithm to obtain the precise position of the pallet and navigate the AGV to the pallet docking position. It combines the complementary advantages of the two guidance modes to provide stable and efficient guidance. In experiments, the system achieved a 94% success rate in simple scenes and an 84% success rate in complex scenes, demonstrating that it can complete the AGV pallet docking navigation task well. When the working environment permits, the proposed system navigates to the pallet docking position with strong flexibility and an excellent success rate.
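The record does not include any of the thesis's code, but the pipeline the abstract describes (SLAM supplying the map and global pose, a detector locating the pallet, navigation driving the AGV to the docking point) can be sketched as a small ROS2 node. The snippet below is an illustrative assumption, not the thesis's implementation: the node name, the /pallet_pose topic, and the use of Nav2's standard NavigateToPose action are all hypothetical choices for the sketch.

    # Minimal sketch, assuming a separate detection node publishes the
    # pallet pose (in the SLAM map frame) on the hypothetical topic
    # /pallet_pose, and that Nav2 is running its standard
    # navigate_to_pose action server.
    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node
    from geometry_msgs.msg import PoseStamped
    from nav2_msgs.action import NavigateToPose

    class PalletDockingNavigator(Node):
        def __init__(self):
            super().__init__('pallet_docking_navigator')
            # Pose of the detected pallet, assumed to be in the map frame.
            self.create_subscription(
                PoseStamped, '/pallet_pose', self.on_pallet_pose, 10)
            # Standard Nav2 action used to reach the docking approach point.
            self.nav_client = ActionClient(
                self, NavigateToPose, 'navigate_to_pose')

        def on_pallet_pose(self, pallet: PoseStamped):
            if not self.nav_client.server_is_ready():
                self.get_logger().warn('Nav2 action server not available yet')
                return
            goal = NavigateToPose.Goal()
            # A real system would offset this to a standoff pose in front
            # of the fork pockets rather than the pallet pose itself.
            goal.pose = pallet
            self.nav_client.send_goal_async(goal)
            self.get_logger().info('Docking goal forwarded to Nav2')

    def main():
        rclpy.init()
        node = PalletDockingNavigator()
        try:
            rclpy.spin(node)
        finally:
            node.destroy_node()
            rclpy.shutdown()

    if __name__ == '__main__':
        main()

In a full docking system the Nav2 goal would only bring the AGV near the pallet; the final fork-pocket approach would then be handed over to the fiducial-guided controller the abstract mentions.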
Keywords (Chinese) ★ Robot Operating System
★ Automated Guided Vehicle
★ visual navigation
★ MIAT methodology
★ Simultaneous Localization and Mapping
Keywords (English) ★ ROS2
★ AGV
★ visual navigation
★ MIAT methodology
★ SLAM
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Contents V
List of Figures VIII
List of Tables X
Chapter 1: Introduction 1
1.1 Research Background 1
1.2 Research Motivation 3
1.3 Research Objectives 3
1.4 Thesis Organization 4
Chapter 2: Review of Pallet Recognition, Localization, and Navigation Techniques 5
2.1 SLAM 5
2.1.1 Lidar SLAM 5
2.1.2 Visual SLAM 9
2.1.3 SLAM-Based Path Planning 10
2.1.4 Complementary Lidar and Camera Obstacle Avoidance 10
2.2 Fiducial-Marker-Based Tag Detection and Pose Estimation Algorithms 11
2.2.1 AprilTag Detector 12
2.2.2 Pose Estimation via Homography 13
2.3 Pallet Recognition 15
2.3.1 Point-Cloud-Based Pallet Recognition 15
2.3.2 Vision-Based Pallet Recognition 16
2.3.3 YOLACT-Based Pallet Recognition 17
2.4 The ROS2 Robot Operating System 17
Chapter 3: Design of an AGV Navigation System with Pallet Recognition 19
3.1 Navigation System Design with Pallet Recognition 19
3.2 SLAM Subsystem 22
3.3 Pallet Recognition Guidance Control Subsystem 24
3.3.1 Localization Computation Design 26
3.4 Fork-Pocket Center Localization Subsystem 28
3.5 Pallet Fork-Pocket Guidance Control Subsystem 29
Chapter 4: System Verification and Experiments 32
4.1 Experimental Environment 32
4.1.1 Vision Sensor Module 32
4.1.2 Laser Ranging Sensor Module 33
4.1.3 Software Development Tools and Simulation Environment Setup Software 34
4.1.4 Experimental Simulation Environment 36
4.2 System Function Module Verification 38
4.2.1 SLAM Subsystem Verification 38
4.2.2 Fork-Pocket Center Localization Subsystem Verification 40
4.2.3 Pallet Fork-Pocket Guidance Control Subsystem Verification 42
4.3 System Simulation 44
4.3.1 Navigation-Based Vehicle Control 45
4.3.2 Pallet-Recognition Vehicle Control 46
4.3.3 Pallet-Center-Recognition Vehicle Control 49
4.3.4 Experiment Summary 50
Chapter 5: Conclusion and Future Work 52
5.1 Conclusion 52
5.2 Future Work 53
References 54
Appendix 58

Advisor: Ching-Han Chen (陳慶瀚)    Approval Date: 2023-1-17
