Name: 林承諺 (Cheng-Yan Lin)
Department: Department of Mechanical Engineering
Thesis title: 應用ROS平台疊合深度攝影機及光達之SLAM (The Integration of SLAM Between the Stereo Camera and Lidar in ROS)
Full text: never released (永不開放)

Abstract (Chinese): With the rapid development and growing complexity of robots, the demand for code reuse has become increasingly strong, giving rise to many open-source robot systems, among them ROS (Robot Operating System). ROS is a software framework designed for robot software development: an open-source meta-level operating system (a "post-operating system") that provides hardware abstraction, low-level device driver management, implementations of commonly used functionality, message passing between processes, and package management, along with tools and open-source libraries for obtaining, building, writing, and running programs across multiple machines. As demand for robots in industrial automation keeps growing, and as robots move into application scenarios such as healthcare and processing plants, a robot must rely on advanced sensing of the surrounding space to navigate its vehicle to a target point while judging both the distance between itself and the person being served and the spatial constraints of the environment. This thesis builds a system on ROS that uses SLAM to acquire map information and localization, augmented with object recognition and human skeleton tracking. The system uses two sensors: a ZED2 depth camera, which mainly provides IMU data, RGB color information, and map refinement, and a LiDAR, which provides more accurate localization.

Abstract (English): With the development and specialization of robots, demand for robot functionality keeps rising. Many open-source robot systems have therefore emerged, and ROS (Robot Operating System) is one of them. ROS is a framework for robot software development: an open-source meta-level operating system (post-operating system) that provides a set of kernel-like services, including hardware abstraction, device driver management, implementations of commonly used functionality, message passing between processes, and package management. It also provides tools and open-source libraries for obtaining, building, writing, and running programs across multiple machines. As the demand for robot automation continues to grow, and to serve application scenarios such as healthcare and processing plants, a robot must rely on advanced perception of its environment to navigate its vehicle to a target point while determining the distance between the service target and the robot. This thesis completes a map-acquisition and localization system based on SLAM under ROS, and adds object recognition with human skeleton tracking. The system uses two kinds of sensors: a ZED2 depth camera and a LiDAR. The ZED2 mainly provides IMU data, RGB color information, and map refinement; the LiDAR provides more accurate localization and the virtual field.

Keywords (Chinese): ★ Robot Operating System ★ depth camera ★ LiDAR ★ simultaneous localization and mapping
Keywords (English): ★ ROS ★ Stereo camera ★ Lidar ★ SLAM

Table of Contents
Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
1. Introduction
1-1 Research Motivation and Objectives
1-2 Literature Review
1-3 Why ROS?
1-4 Thesis Organization
2. Experimental System and Software
2-1 ROS Fundamentals
2-1-1 Filesystem Level
2-1-2 Computation Graph Level
2-2 ZED2 Depth Camera
2-3 Velodyne VLP-16 LiDAR
2-4 Server and Database
2-5 O-ring Router
2-6 Jetson Xavier
2-7 5G Base Station
3. Sensing Principles and Detection Techniques
3-1 Virtual Scene Construction
3-1-1 SLAM
3-1-2 Mathematical Formulation of SLAM
3-2 3D Sensing Technology
3-2-1 ToF Technology
3-2-2 Human Skeleton Localization
3-2-3 Depth Information Scanning
3-3 Real-Time Appearance-Based Mapping (RTAB-Map)
3-4 Communication in ROS
3-5 ROS Tools
3-5-1 ROS Visualization Tool (RViz)
3-5-2 ROS rqt_plot and rqt_graph
3-6 Blender
4. Experimental Architecture and Procedure
4-1 Hardware Architecture
4-2 Software Architecture
4-3 Experimental Procedure
4-3-1 Hardware Model Construction
4-3-2 Sensor Coordinate Adjustment and xacro File Setup
4-3-3 ROS Remote Configuration
4-3-4 Launch File Setup
4-3-5 Experimental Environment: the Smart Factory of the NCU Department of Mechanical Engineering
5. Experimental Results
5-1-1 Topics opened in RViz under the server's ROS
5-1-2 Human skeleton tracking in RViz under the server's ROS
5-1-3 ZED2 RGB and Velodyne-16 point cloud imaging under the server's ROS
5-1-4 Overlay of ZED2 and Velodyne-16 point clouds in RViz
5-1-5 ZED2 and Velodyne-16 RTAB-Map imaging under the server's ROS
5-1-6 Virtual Field Optimization
6. Conclusions and Future Work
6-1 Conclusions
6-2 Future Work
7. References

References

﹝1﹞ B. Barshan and H. F. Durrant-Whyte, “Inertial Navigation Systems for Mobile Robots,”
IEEE Transactions on Robotics and Automation, vol. 11, no. 3, pp. 328–342, 1995.
﹝2﹞ G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with
Rao-Blackwellized particle filters,” IEEE Transactions on Robotics, vol. 23, no. 1, pp.
34–46, 2007.
﹝3﹞ K. P. Murphy, “Bayesian map learning in dynamic environments,” Advances in Neural
Information Processing Systems 12, pp. 1015–1021, 2000.
﹝4﹞ S. Kohlbrecher, O. Von Stryk, J. Meyer, and U. Klingauf, “A flexible and scalable
SLAM system with full 3D motion estimation,” 9th IEEE International Symposium on
Safety, Security, and Rescue Robotics, SSRR 2011.
﹝5﹞ W. Hess, D. Kohler, H. Rapp, and D. Andor, “Real-time loop closure in 2D LiDAR SLAM,”
in Robotics and Automation (ICRA), 2016 IEEE International Conference on, pp. 1271–
1278, IEEE, 2016.
﹝6﹞ G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in
Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International
Symposium on, pp. 225–234, IEEE, 2007.
﹝7﹞ T. Bailey and H. Durrant-Whyte, “Simultaneous localization and mapping (SLAM): Part
II,” IEEE Robotics & Automation Magazine, vol. 13, no. 3, pp. 108–117, 2006.
﹝8﹞ M. Pizzoli, C. Forster, and D. Scaramuzza, “REMODE: Probabilistic, monocular dense
reconstruction in real time,” in Robotics and Automation (ICRA), International
Conference on, pp. 2609–2616, IEEE, 2014.
﹝9﹞ E. Rosten and T. Drummond, “Machine Learning for High Speed Corner Detection,”
Computer Vision – ECCV 2006, vol. 1, pp. 430–443, 2006.
﹝10﹞ M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent
elementary features,” in European Conference on Computer Vision, pp. 778–792,
Springer, 2010.
﹝11﹞ R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: a versatile and accurate
monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
﹝12﹞ E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to
SIFT or SURF,” in Computer Vision (ICCV), 2011 IEEE International Conference on, pp.
2564–2571, IEEE, 2011.
﹝13﹞ J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,”
in European Conference on Computer Vision, pp. 834–849, Springer, 2014.
﹝14﹞ J. Engel, J. Stückler, and D. Cremers, “Large-scale direct SLAM with stereo cameras,”
in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on,
pp. 1935–1942, IEEE, 2015.
﹝15﹞ M. Labbé and F. Michaud, “Online global loop closure detection for large-scale
multi-session graph-based SLAM,” IEEE International Conference on Intelligent Robots and Systems,
pp. 2661–2666, 2014.
﹝16﹞ M. Filipenko and I. Afanasyev, “Comparison of Various SLAM Systems for
Mobile Robot in an Indoor Environment,” 2018 International Conference on Intelligent
Systems (IS), pp. 400–407, IEEE, 2018.
﹝17﹞ F. Pomerleau, F. Colas, R. Siegwart, et al., “A review of point cloud registration
algorithms for mobile robotics,” Foundations and Trends in Robotics, vol. 4, no. 1,
pp. 1–104, 2015.
﹝18﹞ D. Droeschel, M. Nieuwenhuisen, M. Beul, D. Holz, J. Stückler, and S. Behnke,
“Multilayered mapping and navigation for autonomous micro aerial vehicles,” Journal
of Field Robotics, vol. 33, no. 4, pp. 451–475, 2016.
﹝19﹞ T. Shan and B. Englot, “LeGO-LOAM: Lightweight and ground-optimized lidar
odometry and mapping on variable terrain,” in 2018 IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 4758–4765.
﹝20﹞ J. Geng, “Structured-light 3D surface imaging: a tutorial,” Advances in Optics and
Photonics, vol. 3, no. 2, pp. 128–160, 2011.

Advisor: 董必正 (Pi-Cheng Tung)
Date of approval: 2021-10-22
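A note on the formulation behind Section 3-1-2: the abstract's use of SLAM to jointly acquire a map and a pose estimate is conventionally stated, in the standard notation of the SLAM literature (cf. ﹝7﹞), as estimating the posterior over the robot trajectory x and map m given measurements z and controls u. This is the textbook form, not a reproduction of the thesis's own derivation:

```latex
% Full SLAM posterior over trajectory x_{1:t} and map m
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})

% Recursive (online) Bayes-filter form
p(x_t, m \mid z_{1:t}, u_{1:t}) \propto
  p(z_t \mid x_t, m)
  \int p(x_t \mid x_{t-1}, u_t)\,
       p(x_{t-1}, m \mid z_{1:t-1}, u_{1:t-1})\, dx_{t-1}
```

The first line is the full-SLAM problem; the second is the recursion that grid-mapping ﹝2﹞ and RTAB-Map-style systems approximate in practice.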
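Section 4-3-2 (sensor coordinate adjustment via xacro) and Section 5-1-4 (overlaying ZED2 and Velodyne-16 point clouds in RViz) both rest on one operation: expressing camera points in the LiDAR frame through a fixed extrinsic transform. A minimal sketch of that operation follows; the yaw and translation values are illustrative placeholders, not the thesis's actual ZED2-to-VLP-16 calibration:

```python
import math

def make_extrinsic(yaw_rad, tx, ty, tz):
    """Build a 4x4 homogeneous transform: rotation about z by yaw,
    followed by translation (tx, ty, tz).

    The calibration values passed in are hypothetical examples."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [
        [c,  -s,  0.0, tx],
        [s,   c,  0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_points(T, points):
    """Map each (x, y, z) point through the 4x4 transform T."""
    out = []
    for x, y, z in points:
        v = (x, y, z, 1.0)
        out.append(tuple(sum(T[r][k] * v[k] for k in range(4))
                         for r in range(3)))
    return out

# Example: a camera point re-expressed in the LiDAR frame,
# assuming a 90-degree yaw offset and a 10 cm vertical offset.
T = make_extrinsic(math.pi / 2, 0.0, 0.0, 0.1)
print(transform_points(T, [(1.0, 0.0, 0.0)]))
```

In the actual system this transform would be published as a static frame relation (e.g. via the xacro/URDF model), so that RViz can fuse both clouds in a common frame automatically rather than by hand.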