Master's/Doctoral Thesis 111521005: Detailed Record




Name: Yian-Ting Yi (易彥廷)    Graduate Department: Electrical Engineering
Thesis Title: Garbage Pit Point Cloud Construction and 3D Modeling Based on a Dual-LiDAR Architecture (基於雙光達架構之垃圾貯坑點雲資料建立與3D建模)
Related Theses
★ Control of a hybrid power supply system for a direct methanol fuel cell
★ Water quality detection for hydroponic plants using a refractive index measurement method
★ A DSP-based automatic guidance and control system for a model car
★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-ramp and off-ramp signals
★ A study on the fuzziness of fuzzy sets
★ Further improvement of motion control performance for a dual-mass spring-coupled system
★ A machine vision system for air hockey
★ Robot offense and defense control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ A real-time recognition system for access control and monitoring
★ Air hockey: human versus robotic arm
★ A mahjong tile recognition system
★ Application of a correlated-error neural network to radiometric measurement of vegetation and soil moisture
★ Standing control of a three-link robot
Full Text
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access electronic full text is authorized only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): The purpose of this thesis is to design a garbage pit fitted with two 2D LiDARs that scan the garbage inside the pit. After the point cloud data are obtained in spatial coordinates, the point cloud is modeled and the completed model is presented synchronously in a virtual environment built with Unreal Engine.
The research has two main parts: building a model of the garbage pit from point clouds, and combining 2D images with the point clouds to build a colored model of the pit. The experimental field follows the field designed in this laboratory's earlier thesis [1], modified by replacing its single-LiDAR architecture with two LiDARs scanning synchronously; the additional scanned coordinate points raise the fidelity of the reconstructed point cloud. Point cloud modeling proceeds as follows. A motor and belt move a wooden platform carrying the two LiDARs so that they scan the objects in the pit; after scanning, each LiDAR returns its own point cloud in polar coordinates. To present the point cloud in three-dimensional space, the polar coordinates are converted into 3D spatial coordinates, and each point cloud is corrected for the positional offset between the two LiDARs so that both clouds lie in the same coordinate system. Erroneous points are then filtered out, and the point cloud is imported into the modeling software CloudCompare through batch-file commands for modeling; together with the virtual environment built in Unreal Engine, the model provides a height-map effect. Combining images with the point cloud aims to increase the realism of the model: the color of each image pixel is assigned to the corresponding point, so that every point carries the RGB data of an image pixel in addition to its (x, y, z) coordinates. Finally, the colored point cloud is imported synchronously into Unreal Engine for presentation and is also built into a colored model with the modeling software.
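A minimal sketch of the polar-to-Cartesian conversion and offset correction described above, written in C# like the thesis software. The axis convention, the use of the platform's belt travel as the second coordinate, and the mounting-offset vector are illustrative assumptions, not the thesis's actual code.

using System;
using System.Collections.Generic;
using System.Numerics;

static class ScanToCloud
{
    // Convert one LiDAR sample (range r in meters, beam angle theta in degrees within the
    // scan plane) to a 3D point, then shift it by that LiDAR's mounting offset so that the
    // clouds from both LiDARs share a single coordinate system.
    public static Vector3 ToPoint(float r, float thetaDeg, float platformY, Vector3 mountOffset)
    {
        double theta = thetaDeg * Math.PI / 180.0;
        float x = (float)(r * Math.Cos(theta));   // across the pit, in the scan plane
        float z = (float)(r * Math.Sin(theta));   // depth into the pit, in the scan plane
        return new Vector3(x, platformY, z) + mountOffset;   // platformY: travel along the belt
    }

    // Convert a whole scan taken at one platform position.
    public static List<Vector3> ConvertScan(IEnumerable<(float r, float thetaDeg)> scan,
                                            float platformY, Vector3 mountOffset)
    {
        var cloud = new List<Vector3>();
        foreach (var (r, thetaDeg) in scan)
            cloud.Add(ToPoint(r, thetaDeg, platformY, mountOffset));
        return cloud;
    }
}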
To improve research efficiency and the user experience, the entire process runs under Windows: a main program written in C# calls the operating procedures of each hardware device in sequence, and data transmission and reception are likewise handled in the Windows environment.
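As one example of how the C# main program can hand the filtered point cloud to CloudCompare's command-line mode for meshing (the batch-file import step described above), the sketch below launches the CloudCompare executable with batch-style arguments. The executable path and file name are placeholders, and the flags, taken from the CloudCompare command-line documentation [27], should be verified against the installed version.

using System.Diagnostics;

static class ModelingStep
{
    // Launch CloudCompare in silent command-line mode: open the cloud, build a Delaunay
    // mesh, and save the result. Paths and options are placeholders for illustration.
    public static void BuildMesh(string cloudFile)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName  = @"C:\Program Files\CloudCompare\CloudCompare.exe",   // assumed install path
            Arguments = $"-SILENT -O \"{cloudFile}\" -DELAUNAY -BEST_FIT -SAVE_MESHES",
            UseShellExecute = false
        };
        using (var process = Process.Start(startInfo))
        {
            process?.WaitForExit();   // block until the mesh file has been written
        }
    }
}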
Abstract (English): The main purpose of this thesis is to design a garbage pit equipped with two 2D LiDARs to scan the garbage inside the pit. After the point cloud data are obtained in spatial coordinates, the point cloud is modeled, and the completed model is synchronously presented in a virtual environment built with Unreal Engine.
The research in this thesis focuses on two main aspects: using point clouds to establish a model of the garbage pit, and combining 2D images with point clouds to create a colored model of the pit. The experimental field follows the one designed in the laboratory's earlier thesis [1], modified by replacing the single-LiDAR structure with a dual-LiDAR synchronous scanning system; increasing the number of scanned coordinate points improves the fidelity of the point cloud. The steps of point cloud modeling are as follows: first, a motor and belt move a wooden platform carrying the two LiDARs to scan the objects in the pit. After scanning, the two LiDARs return their respective point clouds in polar coordinates. To present the point cloud in three-dimensional space, we convert the polar coordinates to 3D spatial coordinates and, accounting for the positional difference between the two LiDARs, align the two sets of point cloud data to the same coordinate system. Finally, erroneous coordinate points are filtered out, and the point cloud is imported into the modeling software CloudCompare using batch-file commands for modeling. The virtual environment established with Unreal Engine gives the model a height-map effect. The combination of images and point clouds aims to improve the realism of the model by assigning the color of each pixel to the corresponding point in the point cloud, so that each point has not only (x, y, z) spatial data but also the RGB data of an image pixel. Finally, the colored point cloud is synchronously imported into Unreal Engine for presentation and is also built into a colored model with the modeling software.
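A minimal sketch of the pixel-to-point color assignment described above, using the .NET System.Drawing Color and Image classes that the thesis references [28][29]. The projection from a point's planar coordinates to pixel indices is a simplified placeholder; the thesis's actual image segmentation and pixel matching are not reproduced here.

using System.Collections.Generic;
using System.Drawing;
using System.Numerics;

record ColoredPoint(Vector3 Position, Color Color);

static class CloudColoring
{
    // Give each 3D point the RGB value of the image pixel it maps to, so the point
    // carries (x, y, z) plus color. metersPerPixel is a hypothetical scale factor.
    public static List<ColoredPoint> Colorize(IReadOnlyList<Vector3> cloud, Bitmap image,
                                              float metersPerPixel)
    {
        var colored = new List<ColoredPoint>(cloud.Count);
        foreach (var point in cloud)
        {
            int u = (int)(point.X / metersPerPixel);   // simplified planar projection
            int v = (int)(point.Y / metersPerPixel);
            if (u < 0 || v < 0 || u >= image.Width || v >= image.Height)
                continue;                              // point falls outside the image
            colored.Add(new ColoredPoint(point, image.GetPixel(u, v)));
        }
        return colored;
    }
}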
To improve research efficiency and the user experience, the research is carried out entirely in a Windows environment. The main execution program is written in C# and sequentially calls the operating procedures of each hardware device; data transmission and reception are likewise handled under Windows.
Keywords: ★ point cloud processing
★ motor control
★ coordinate transformation
★ 2D LiDAR
★ sensor fusion
★ Unreal Engine
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Research Background and Motivation
1.2 Literature Review
1.3 Thesis Objectives
1.4 Thesis Organization
Chapter 2  System Architecture and Hardware/Software Overview
2.1 System Architecture
2.2 Hardware Architecture
2.2.1 Laptop Specifications
2.2.2 Motor Specifications
2.2.3 LiDAR Specifications
2.3 Software Overview
2.3.1 GAZEBO Virtual Environment
2.3.2 Unreal Engine Development Software
2.3.3 LiDAR Software
2.3.4 Motor Operation Software
Chapter 3  Experimental Field Design and Point Cloud Modeling Process
3.1 Small-Scale Field Structure
3.1.1 Small-Scale Field Dimensions
3.1.2 Installation of the 2D LiDARs
3.1.3 Installation of the Motor in the Small-Scale Field
3.2 Point Cloud Scanning Procedure
3.2.1 LiDAR Driving and Scan Angle Calculation
3.2.2 Motor Driving and Rotation Distance Calculation
3.2.3 Coordinate Transformation
3.3 Point Cloud Modeling
3.3.1 Point Cloud Import and Modeling
3.3.2 Execution Automation
3.4 Point Cloud Data Processing
3.4.1 Point Cloud Data Integration
3.4.2 Noise Filtering
3.4.3 Point Cloud Visualization Window
Chapter 4  Virtual Environment Development
4.1 GAZEBO Virtual Environment
4.1.1 Virtual Environment Setup
4.1.2 Scanning Simulation with Single and Dual LiDARs
4.2 Unreal Engine Virtual Environment
4.2.1 Virtual Environment Setup
4.2.2 Object Material Maps
4.2.3 Model Twinning
4.3 Combining Point Clouds with 2D Color Images
4.3.1 Image Data Segmentation
4.3.2 Point Cloud Data Integration and Pixel Matching
4.3.3 Point Cloud Modeling
Chapter 5  Experimental Results
5.1 Single- and Dual-LiDAR Scanning Simulation Results
5.2 Appearance of the Twin Model
5.3 Point Cloud and Color Image Fusion Results
Chapter 6  Conclusion and Future Work
6.1 Conclusion
6.2 Future Work
References
References
[1] 莊晨馨, "垃圾貯坑內之垃圾高度圖建置及其模擬環境建構" (Garbage height map construction in a garbage pit and construction of its simulation environment), Master's thesis, Graduate Institute of Information and Electrical Engineering, National Central University, 2023 (advisor: Wen-June Wang).
[2] Ministry of Environment, R.O.C. (Taiwan), "National general waste generation." [Online]. Available: https://data.moenv.gov.tw/dataset/detail/STAT_P_126
[3] Department of Household Registration, Ministry of the Interior, R.O.C. (Taiwan), "Population statistics." [Online]. Available: https://www.ris.gov.tw/app/portal/346
[4] Z. Wang, Y. OuYang, and O. Kochan, "Bidirectional Linkage Robot Digital Twin System Based on ROS," 2023 17th International Conference on the Experience of Designing and Application of CAD Systems (CADSM), Jaroslaw, Poland, 2023, pp. 1-5
[5] G. Béchu, A. Beugnard, C. G. L. Cao, Q. Perez, C. Urtado, and S. Vauttier, "A software engineering point of view on digital twin architecture," 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Stuttgart, Germany, 2022, pp. 1-4.
[6] L. Wei, "Research on Digital Twin City Platform Based on Unreal Engine," 2022 International Conference on Information Processing and Network Provisioning (ICIPNP), Beijing, China, 2022, pp. 142-146.
[7] X. Wang, C. Bao, Z. Sun, and X. Wang, "Research on the application of digital twin in aerospace manufacturing based on 3D point cloud," 2022 International Conference on Electronics and Devices, Computational Science (ICEDCS), Marseille, France, 2022, pp. 308-313
[8] A. Kuzminykh, J. Rohde, P. O. Gottschewski-Meyer, and V. Ahlers, "Stereo Vision and LiDAR Based Point Cloud Acquisition for Creating Digital Twins in Indoor Applications," 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Dortmund, Germany, 2023, pp. 947-952.
[9] A. Köse, A. Tepljakov, and S. Astapov, "Real-time localization and visualization of a sound source for virtual reality applications," 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 2017, pp. 1-6
[10] B. -C. -Z. Blaga, and S. Nedevschi, "Employing Simulators for Collision-Free Autonomous UAV Navigation," 2022 IEEE 18th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 2022, pp. 313-318
[11] W. Jansen, N. Huebel, and J. Steckel, "Physical LiDAR Simulation in Real-Time Engine," 2022 IEEE Sensors, Dallas, TX, USA, 2022, pp. 1-4
[12] G. -H. Lin, C. -H. Chang, M. -C. Chung, and Y. -C. Fan, "Self-driving Deep Learning System based on Depth Image Based Rendering and LiDAR Point Cloud," 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 2020, pp. 1-2
[13] Q. Song and N. Kubota, "Object Recognition and Spatial Relationship Determination in Point Clouds," 2023 International Conference on Machine Learning and Cybernetics (ICMLC), Adelaide, Australia, 2023, pp. 536-542
[14] L. Liu, X. Yu, W. Wan, H. Yu, and R. Liu, "Rendering of large-scale 3D terrain point cloud based on out-of-core," 2012 International Conference on Audio, Language and Image Processing, Shanghai, China, 2012, pp. 740-744
[15] M. Miknis, R. Davies, P. Plassmann, and A. Ware, "Near real-time point cloud processing using the PCL," 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), London, UK, 2015, pp. 153-156
[16] F. Capraro and S. Milani, "Rendering-Aware Point Cloud Coding for Mixed Reality Devices," 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 3706-3710
[17] R. Singh, H. Goyal, K. Kumar, G. Arora, and M. Singhal, "A Command-Line Interface Tool for Efficient Git Workflow: “Commandeer”," 2023 3rd International Conference on Technological Advancements in Computational Sciences (ICTACS), Tashkent, Uzbekistan, 2023, pp. 313-318
[18] I. Traore, I. Woungang, Y. Nakkabi, M. S. Obaidat, A. A. E. Ahmed, and B. Khalilian, "Dynamic Sample Size Detection in Learning Command Line Sequence for Continuous Authentication," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 5, pp. 1343-1356, Oct. 2012
[19] E. Pyshkin and A. Kuznetsov, "A Provisioning Service for Automatic Command Line Applications Deployment in Computing Clouds," 2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), Paris, France, 2014, pp. 518-521
[20] M. Kumar and W. Shi, "Energy Consumption Analysis of Java Command-line Options," 2019 Tenth International Green and Sustainable Computing Conference (IGSC), Alexandria, VA, USA, 2019, pp. 1-8
[21] L. He, Y. Chen, and J. Zhao, "Automatic Docking Recognition and Location Algorithm of Port Oil Loading Arm Based on 3D Laser Point Cloud," 2020 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2020, pp. 615-620.
[22] I. Ashraf, S. Hur, and Y. Park, "An Investigation of Interpolation Techniques to Generate 2D Intensity Image From LIDAR Data," in IEEE Access, vol. 5, pp. 8250-8260, 2017.
[23] "G513QE-0031C5900HX,"
[Online].Available : https://rog.asus.com/tw/laptops/rog-strix/2021-rog-strix-g15-series/spec/,2024年6月
[24] " Dynamixel RX-64,"
[Online]. Available : https://emanual.robotis.com/docs/en/dxl/rx/rx-64/. ,2024年6月
[25] "SICK-TiM571-2050101,".
[Online]Available:https://www.sick.com/fi/en/catalog/products/lidar-and-radar-sensors/lidar-sensors/tim/tim571-2050101/p/p412444,2024年6月。
[26] "SOPAS Engineering tool," [Online].Available :
https://www.sick.com/tw/zf/catalog/digital-services-and-solutions/software/sopas-engineering-tool/p/p367244 , 2024年6月
[27] Cloud Compare command line list
[Online]Available :https://www.cloudcompare.org/doc/wiki/index.php/Command_line_mode , 2024年6月
[28] "color class,"
[Online]Available:https://learn.microsoft.com/zhtw/dotnet/api/system.drawing.color?view=net-8.0, 2024年6月
[29] "img class," [Online].Available:
https://learn.microsoft.com/zh-tw/dotnet/api/system.drawing.image?view=dotnet-plat-ext-8.0, 2024年6月
[30] "Delaunay三角化," [Online].Available:
https://baike.baidu.hk/item/Delaunay%E4%B8%89%E8%A7%92%E5%89%96%E5%88%86%E7%AE%97%E6%B3%95/3779918, 2024年6月
[31] "OpenGL," [Online].Available:
https://hackmd.io/@ntust-ossda/rk-d2G-Mh, 2024年6月
Advisor: Wen-June Wang (王文俊)    Review Date: 2024-07-30
