NCU Institutional Repository (中大機構典藏): Item 987654321/97182


    Please use this identifier to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/97182


    Title: 基於稀疏點雲和影像資訊的3D場景重建與渲染;3D Scene Reconstruction and Rendering Enhancement from Sparse Point Cloud and Image Data
    Authors: 蔡睿芸;Tsai, Jui-Yun
    Contributors: International Graduate Program in Artificial Intelligence
    Keywords: SLAM;3D Reconstruction;Texture Synthesis;Robot Simulation;Point Cloud Segmentation;Point Cloud Completion
    Date: 2025-07-10
    Issue Date: 2025-10-17 10:55:49 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Simultaneous Localization and Mapping (SLAM) systems typically provide geometric and spatial data for 3D scene understanding. However, the resulting maps, particularly those generated from LiDAR, are often sparse, incomplete, and unsuitable for high-fidelity reconstruction or simulation. While traditional SLAM frameworks focus on localization accuracy, they generally lack mechanisms for map refinement or semantic enhancement.

    In this work, we propose a post-SLAM reconstruction framework that transforms raw SLAM maps into simulation-ready 3D environments. Our approach separates static backgrounds and dynamic objects through adaptive segmentation techniques, applies geometry completion to compensate for occlusions and sparse measurements, and reconstructs surface meshes enriched with textures derived from visual inputs. The framework accommodates both indoor and outdoor scenes by adopting scene-specific strategies for floor extraction, object isolation, and texture synthesis.
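    The floor-extraction, object-isolation, and meshing steps described in this paragraph can be sketched with off-the-shelf point cloud tooling. The Python example below uses Open3D for RANSAC plane segmentation, DBSCAN clustering, and Poisson surface reconstruction; the file names and parameter values are illustrative assumptions and do not reproduce the thesis pipeline, which additionally performs geometry completion and texture synthesis.

        import numpy as np
        import open3d as o3d

        # Load a sparse SLAM point cloud exported to a file (hypothetical name).
        pcd = o3d.io.read_point_cloud("slam_map.pcd")

        # Floor extraction: fit the dominant plane with RANSAC.
        plane_model, floor_idx = pcd.segment_plane(distance_threshold=0.05,
                                                   ransac_n=3,
                                                   num_iterations=1000)
        floor = pcd.select_by_index(floor_idx)
        objects = pcd.select_by_index(floor_idx, invert=True)

        # Object isolation: cluster the remaining points with DBSCAN;
        # label -1 marks noise, other labels index individual objects.
        labels = np.array(objects.cluster_dbscan(eps=0.3, min_points=30))

        # Surface reconstruction: estimate normals, then Poisson meshing.
        objects.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            objects, depth=9)
        o3d.io.write_triangle_mesh("scene_mesh.obj", mesh)

    In the proposed framework, the plain Poisson reconstruction shown here would be preceded by point cloud completion and followed by texture synthesis from the accompanying image data.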

    The final 3D scenes are assembled and deployed in a physics-based simulator, enabling tasks such as robot navigation, environment interaction, and layout validation. By bridging the gap between SLAM-based geometry and high-quality 3D scene modeling, our system offers a practical solution for constructing detailed and visually consistent digital environments from real-world data.
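    As a rough illustration of this deployment step, the sketch below loads a reconstructed environment mesh into PyBullet as a static body and drops a sample robot into it. PyBullet, the mesh file name, and the robot model are stand-in assumptions; the abstract does not name the simulator actually used.

        import pybullet as p
        import pybullet_data

        p.connect(p.DIRECT)  # use p.GUI for an interactive window
        p.setAdditionalSearchPath(pybullet_data.getDataPath())
        p.setGravity(0, 0, -9.81)

        # Load the reconstructed scene mesh as a static body; the concave-trimesh
        # flag keeps the environment geometry from collapsing to a convex hull.
        col = p.createCollisionShape(p.GEOM_MESH, fileName="scene_mesh.obj",
                                     flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
        vis = p.createVisualShape(p.GEOM_MESH, fileName="scene_mesh.obj")
        scene = p.createMultiBody(baseMass=0,
                                  baseCollisionShapeIndex=col,
                                  baseVisualShapeIndex=vis)

        # Drop a sample robot into the scene and step the physics simulation.
        robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])
        for _ in range(240):
            p.stepSimulation()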
    Appears in Collections:[ International Graduate Program in Artificial Intelligence ] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

