Master's/Doctoral Thesis 104582004: Detailed Record




Author: Chien-Chun Wang (王建鈞)    Graduate Department: Computer Science and Information Engineering
Thesis Title: LaVIS:融合雷射、視覺、慣性感測器的機器人定位和導航系統
(LaVIS: Laser, Vision, and Inertial Sensing Fusion for Robot Positioning and Navigation)
Related Theses
★ 整合GRAFCET虛擬機器的智慧型控制器開發平台
★ 分散式工業電子看板網路系統設計與實作
★ 設計與實作一個基於雙攝影機視覺系統的雙點觸控螢幕
★ 智慧型機器人的嵌入式計算平台
★ 一個即時移動物偵測與追蹤的嵌入式系統
★ 一個固態硬碟的多處理器架構與分散式控制演算法
★ 基於立體視覺手勢辨識的人機互動系統
★ 整合仿生智慧行為控制的機器人系統晶片設計
★ 嵌入式無線影像感測網路的設計與實作
★ 以雙核心處理器為基礎之車牌辨識系統
★ 基於立體視覺的連續三維手勢辨識
★ 微型、超低功耗無線感測網路控制器設計與硬體實作
★ 串流影像之即時人臉偵測、追蹤與辨識─嵌入式系統設計
★ 一個快速立體視覺系統的嵌入式硬體設計
★ 即時連續影像接合系統設計與實作
★ 基於雙核心平台的嵌入式步態辨識系統
Files: Full text available for browsing in the system after 2025-07-21
Abstract (Chinese) In recent years, Lidar sensors have been widely applied to environment sensing for robot vacuums, but they typically provide only a planar scan of the environment and lack height information. When a robot vacuum enters a particular area, it may be unable to leave it because of height constraints. Visual Simultaneous Localization and Mapping (SLAM) can resolve such spatial-obstacle problems, but its accuracy and robustness still leave room for improvement. This study proposes a sensor fusion method: a Lidar sensor builds a first-layer baseline map, and the depth information and object features from the Lidar and an RGB-D camera are fused to construct a spatial image. The robot's Inertial Measurement Unit (IMU) coordinate system is taken as the parent coordinate system, and the other sensors serve as child coordinate systems referenced to the robot's initial coordinate origin. The robot's motion trajectory is also integrated into this coordinate system: motion path information obtained from the motion encoder and the IMU is fused with the image and Lidar coordinates. For path planning, we combine the A* algorithm with the Dynamic Window Approach (DWA), completing a 2D-Lidar-centric sensor fusion SLAM system, LaVIS. The system lets a robot vacuum quickly build a 3D map, plan an optimal path, and dynamically avoid obstacles. Finally, we apply the Machine Intelligence and Automation Technology (MIAT) methodology to the design and integration verification of LaVIS. Our experiments show that LaVIS achieves good SLAM performance and accuracy and can be applied to robot vacuums and a wide range of autonomous robots.
Abstract (English) In recent years, Lidar sensors have been widely used for environment sensing in robot vacuums, but they usually provide only planar environmental scans and lack height information. When a robot vacuum enters a confined space, it may be unable to leave because of height restrictions. Visual Simultaneous Localization and Mapping (SLAM) can solve such spatial-obstacle problems, but its accuracy and robustness still leave room for improvement. We propose a sensor fusion method that uses a Lidar sensor to create a first-level baseline map and fuses Lidar and RGB-D camera depth information and object features to create a spatial image. The robot's Inertial Measurement Unit (IMU) coordinate system serves as the parent coordinate system, and the other sensors act as child coordinate systems referenced to the robot's initial coordinate origin. The robot's trajectory is also integrated into this coordinate system: motion path information obtained from the motion encoder and the IMU is fused with the image and Lidar coordinates. For path planning we combine the A* algorithm with the Dynamic Window Approach (DWA), completing a 2D-Lidar-based sensor fusion SLAM system: LaVIS. LaVIS allows a robot vacuum to build a 3D map quickly, plan an optimal path, and avoid obstacles dynamically. Finally, we use the Machine Intelligence and Automation Technology (MIAT) methodology for LaVIS system design and integration verification. Our experiments show that LaVIS achieves good SLAM performance and accuracy and can be applied to robot vacuums and a wide range of autonomous robots.
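The fusion scheme described in the abstract treats the IMU coordinate system as the parent frame and every other sensor as a child frame. The following minimal Python sketch illustrates that idea under assumed extrinsics; the mounting offsets and helper functions (make_transform, to_parent_frame) are hypothetical illustrations, not taken from the thesis.

import numpy as np

def make_transform(yaw_rad, translation):
    # 4x4 homogeneous transform: rotation about the z-axis plus a translation.
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

def to_parent_frame(points_child, T_parent_child):
    # Map an Nx3 array of points from a child sensor frame into the parent (IMU) frame.
    homogeneous = np.hstack([points_child, np.ones((len(points_child), 1))])
    return (T_parent_child @ homogeneous.T).T[:, :3]

# Hypothetical mounting: Lidar 0.10 m ahead of and 0.05 m above the IMU, no rotation.
T_imu_lidar = make_transform(0.0, [0.10, 0.0, 0.05])
lidar_points = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(to_parent_frame(lidar_points, T_imu_lidar))

In the same spirit, odometry accumulated from the motion encoder and IMU would be composed in this parent frame before being fused with the Lidar and image coordinates.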
Keywords (Chinese) ★ 感測融合 (sensing fusion)
★ 定位 (positioning)
★ 導航 (navigation)
Keywords (English) ★ Sensing Fusion
★ Positioning
★ Navigation
Table of Contents
Abstract (Chinese) ii
Abstract iii
Acknowledgements iv
Table of Contents v
List of Figures vii
List of Tables xi
Chapter 1. Introduction 1
1.1 Robotics History 1
1.2 Research motivation and purpose 4
1.3 Research methods 5
Chapter 2. Literature Review 9
2.1 What is SLAM? 9
2.2 Development of SLAM 16
2.3 SLAM Sensor Fusion 29
2.4 SLAM & Robot Relationship 30
Chapter 3. Related works 32
3.1 Lidar 32
3.2 RGBD 33
3.3 IMU 42
3.4 Route Plan 49
Chapter 4. LaVIS Sensor fusion design 57
4.1 Sensor Fusion 57
4.2 MIAT Methodology 62
4.3 LaVIS System Structure 68
4.4 LaVIS system with MIAT method 69
Chapter 5. LaVIS experimental environment 76
5.1 Software 76
5.2 Hardware 79
Chapter 6. LaVIS Experiment 85
6.1 IMU calibration experiment 85
6.2 Lidar SLAM 89
6.3 RGBD Sensor and VSLAM 94
6.4 LaVIS Fusion 96
Chapter 7. Conclusion 100
References 101
Advisor: Ching-Han Chen (陳慶瀚)    Review Date: 2020-07-28