

    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/97182


    Title: 基於稀疏點雲和影像資訊的3D場景重建與渲染; 3D Scene Reconstruction and Rendering Enhancement from Sparse Point Cloud and Image Data
    Authors: Tsai, Jui-Yun (蔡睿芸)
    Contributors: International Master's Program in Artificial Intelligence
    Keywords: SLAM; 3D Reconstruction; Texture Synthesis; Robot Simulation; Point Cloud Segmentation; Point Cloud Completion
    Date: 2025-07-10
    Date of Upload: 2025-10-17 10:55:49 (UTC+8)
    Publisher: National Central University
    Abstract: Simultaneous Localization and Mapping (SLAM) systems typically provide geometric and spatial data for 3D scene understanding. However, the resulting maps, particularly those generated from LiDAR, are often sparse, incomplete, and unsuitable for high-fidelity reconstruction or simulation. While traditional SLAM frameworks focus on localization accuracy, they generally lack mechanisms for map refinement or semantic enhancement.

    In this work, we propose a post-SLAM reconstruction framework that transforms raw SLAM maps into simulation-ready 3D environments. Our approach separates static backgrounds and dynamic objects through adaptive segmentation techniques, applies geometry completion to compensate for occlusions and sparse measurements, and reconstructs surface meshes enriched with textures derived from visual inputs. The framework accommodates both indoor and outdoor scenes by adopting scene-specific strategies for floor extraction, object isolation, and texture synthesis.
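
    For orientation, the stages named above (background/floor extraction, object isolation, completion, and meshing) could look roughly like the following Python sketch. Open3D, the function name reconstruct_scene, and all thresholds are illustrative assumptions, not the implementation described in the thesis.

    import numpy as np
    import open3d as o3d

    def reconstruct_scene(pcd_path):
        """Rough post-SLAM pipeline: segment, cluster, complete, and mesh a point cloud."""
        pcd = o3d.io.read_point_cloud(pcd_path)

        # Floor / static-background extraction via RANSAC plane fitting
        # (one possible form of the adaptive segmentation step).
        _, floor_idx = pcd.segment_plane(distance_threshold=0.05,
                                         ransac_n=3, num_iterations=1000)
        floor = pcd.select_by_index(floor_idx)
        objects = pcd.select_by_index(floor_idx, invert=True)

        # Object isolation: cluster the non-floor points with DBSCAN.
        labels = np.array(objects.cluster_dbscan(eps=0.3, min_points=20))

        # Geometry completion (e.g. a learned point-cloud completion network)
        # would be applied to each cluster here; omitted from this sketch.

        # Surface meshing via Poisson reconstruction; textures derived from the
        # camera images would then be projected onto this mesh.
        pcd.estimate_normals()
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
        return mesh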

    The final 3D scenes are assembled and deployed in a physics-based simulator, enabling tasks such as robot navigation, environment interaction, and layout validation. By bridging the gap between SLAM-based geometry and high-quality 3D scene modeling, our system offers a practical solution for constructing detailed and visually consistent digital environments from real-world data.
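
    A minimal sketch of that deployment step, assuming PyBullet as the physics-based simulator and scene_mesh.obj as a hypothetical textured mesh produced by the pipeline; the thesis does not necessarily use these tools or names.

    import pybullet as p
    import pybullet_data

    p.connect(p.DIRECT)  # p.GUI for an interactive window
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)

    # Load the reconstructed scene as a static (mass 0) concave collision mesh
    # plus a visual mesh carrying the synthesized textures.
    col_id = p.createCollisionShape(p.GEOM_MESH, fileName="scene_mesh.obj",
                                    flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
    vis_id = p.createVisualShape(p.GEOM_MESH, fileName="scene_mesh.obj")
    scene = p.createMultiBody(baseMass=0,
                              baseCollisionShapeIndex=col_id,
                              baseVisualShapeIndex=vis_id)

    # A robot can then be spawned inside the scene for navigation tests.
    robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])
    for _ in range(240):
        p.stepSimulation()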
    Appears in Collections: [International Master's Program in Artificial Intelligence] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      15


    All items in NCUIR are protected by copyright, with all rights reserved.

