NCU Institutional Repository (National Central University) — theses, past exams, journal articles, and research projects: Item 987654321/95773
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use the permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95773


    Title: Point Cloud Data Construction and 3D Modeling of a Garbage Pit Based on a Dual-LiDAR Architecture
    Authors: 易彥廷; Yi, Yian-Ting
    Contributors: Department of Electrical Engineering (電機工程學系)
    Keywords: motor control; point cloud processing; coordinate transformation; 2D LiDAR; sensor fusion; Unreal Engine
    Date: 2024-07-30
    Upload time: 2024-10-09 17:15:52 (UTC+8)
    Publisher: National Central University
    Abstract: The main purpose of this thesis is to design a garbage pit equipped with two 2D LiDARs that scan the garbage inside the pit. After the point cloud data are obtained in spatial coordinates, the point cloud is modeled, and the completed model is presented synchronously in a virtual environment built with Unreal Engine.
    The research focuses on two main aspects: using point clouds to build a model of the garbage pit, and combining 2D images with point clouds to build a colored model of the pit. The field setup follows the experimental field designed in the laboratory's earlier thesis [1], modified by replacing the single-LiDAR structure with a dual-LiDAR synchronous scanning system; the larger number of scanned coordinate points improves the fidelity of the point cloud. Point cloud modeling proceeds as follows. First, a motor and belt drive a wooden platform carrying the two LiDARs to scan the objects in the pit. After scanning, each LiDAR returns its point cloud as polar coordinate data. To present the point cloud in three-dimensional space, the polar coordinates are converted to 3D spatial coordinates, and, to account for the positional difference between the two LiDARs, each set of point cloud data is position-corrected so that both clouds lie in the same coordinate system. Finally, erroneous coordinate points are filtered out, and the cloud is imported into the modeling software CloudCompare via batch-file commands for modeling; the virtual environment built with Unreal Engine gives the model a heightmap effect. The combination of images and point clouds aims to improve the realism of the model: the color of each pixel is assigned to the corresponding point in the cloud, so that each point carries the image's RGB data in addition to its (x, y, z) coordinates. The colored point cloud is then imported synchronously into Unreal Engine for presentation and is also built into a colored model with the modeling software.
    To improve research efficiency and the user experience, the entire process runs in a Windows environment: a main program written in C# sequentially calls the operating procedures of the various hardware components, and data transmission and reception are likewise handled under Windows.
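The geometric pipeline described in the abstract (polar-to-Cartesian conversion, alignment of the second LiDAR's cloud, and filtering of erroneous points) can be sketched as follows. The thesis implements this in C# on Windows; the Python below is only an illustration, and the axis convention, offset values, and range threshold are assumptions, not taken from the thesis.

```python
import math

def polar_to_xyz(r, theta_deg, platform_y):
    """Convert one 2D-LiDAR polar sample (range r, angle theta) taken at
    belt position platform_y into a 3D point. Axis convention (assumed):
    the LiDAR scan plane is x-z, and the belt moves the platform along y."""
    theta = math.radians(theta_deg)
    x = r * math.cos(theta)
    z = r * math.sin(theta)
    return (x, platform_y, z)

def align_second_lidar(points, offset):
    """Translate the second LiDAR's points by its fixed mounting offset
    (dx, dy, dz) so both clouds share one coordinate system. The real
    offset would be measured from the physical rig."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]

def filter_points(points, r_max):
    """Drop points farther than r_max from the origin -- a simple stand-in
    for the erroneous-point filtering described in the abstract."""
    return [p for p in points if math.dist(p, (0.0, 0.0, 0.0)) <= r_max]
```

After merging the two aligned clouds, the result could be written to an ASCII point file for batch import into CloudCompare, as the abstract describes.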
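The colorization step (assigning each pixel's RGB value to a point so the cloud carries (x, y, z, R, G, B) data) can likewise be sketched. A simple top-down orthographic world-to-pixel mapping is assumed here for illustration; the thesis's actual camera-to-LiDAR correspondence may differ, and all names below are hypothetical.

```python
def colorize_point_cloud(points, image, width, height, x_range, y_range):
    """Attach an RGB triple from `image` (a row-major list of (r, g, b)
    pixels) to each (x, y, z) point. x_range and y_range give the (min,
    max) extents of the pit in world coordinates; pixel indices are
    clamped to the image bounds."""
    x_min, x_max = x_range
    y_min, y_max = y_range
    colored = []
    for (x, y, z) in points:
        # Normalize world coordinates into pixel indices, clamped to the image.
        u = min(width - 1, max(0, int((x - x_min) / (x_max - x_min) * (width - 1))))
        v = min(height - 1, max(0, int((y - y_min) / (y_max - y_min) * (height - 1))))
        r, g, b = image[v * width + u]
        colored.append((x, y, z, r, g, b))
    return colored
```

The resulting colored points could then be exported for synchronous display in Unreal Engine and for colored-model building, as the abstract describes.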
    Appears in Collections: [Graduate Institute of Electrical Engineering] Theses & Dissertations

    Files in this item:

    File         Description    Size    Format    Views
    index.html                  0Kb     HTML      20        View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

