Simultaneous Localization and Mapping (SLAM) systems typically provide geometric and spatial data for 3D scene understanding. However, the resulting maps, particularly those generated from LiDAR, are often sparse, incomplete, and unsuitable for high-fidelity reconstruction or simulation. While traditional SLAM frameworks focus on localization accuracy, they generally lack mechanisms for map refinement or semantic enhancement.
In this work, we propose a post-SLAM reconstruction framework that transforms raw SLAM maps into simulation-ready 3D environments. Our approach separates static backgrounds from dynamic objects through adaptive segmentation, applies geometry completion to compensate for occlusions and sparse measurements, and reconstructs surface meshes enriched with textures derived from visual inputs. The framework accommodates both indoor and outdoor scenes by adopting scene-specific strategies for floor extraction, object isolation, and texture synthesis.
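As a rough illustration of these stages, the sketch below approximates the pipeline with off-the-shelf Open3D primitives: RANSAC plane fitting stands in for floor extraction, DBSCAN clustering for object isolation, and Poisson reconstruction for meshing. This is not the paper's actual method; the adaptive segmentation, geometry completion, and texture synthesis it describes are not reproduced here, and the file name `slam_map.pcd` and all parameter values are hypothetical.

```python
# Illustrative sketch only: approximates the described pipeline with
# generic Open3D operations, not the framework's actual implementation.
import numpy as np
import open3d as o3d

# Hypothetical input: a raw, sparse SLAM point-cloud map.
pcd = o3d.io.read_point_cloud("slam_map.pcd")

# Floor-extraction stand-in: RANSAC fits the dominant ground plane.
plane_model, inliers = pcd.segment_plane(
    distance_threshold=0.05, ransac_n=3, num_iterations=1000)
floor = pcd.select_by_index(inliers)
remainder = pcd.select_by_index(inliers, invert=True)

# Object-isolation stand-in: DBSCAN groups the remaining points into
# candidate object clusters (label -1 marks noise points).
labels = np.array(remainder.cluster_dbscan(eps=0.3, min_points=20))
n_clusters = int(labels.max()) + 1 if labels.size else 0
clusters = [remainder.select_by_index(np.flatnonzero(labels == k))
            for k in range(n_clusters)]

# Meshing stand-in: estimate normals, then Poisson surface
# reconstruction turns the background points into a triangle mesh.
floor.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    floor, depth=9)
o3d.io.write_triangle_mesh("background_mesh.obj", mesh)
```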
The final 3D scenes are assembled and deployed in a physics-based simulator, enabling tasks such as robot navigation, environment interaction, and layout validation. By bridging the gap between SLAM-based geometry and high-quality 3D scene modeling, our system offers a practical solution for constructing detailed and visually consistent digital environments from real-world data.
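To make the deployment step concrete, here is a minimal sketch assuming PyBullet as the physics-based simulator (the abstract does not name one). The reconstructed scene is loaded as a static concave collision mesh and a stock robot model is dropped in for navigation tests; `scene_mesh.obj` and the poses are placeholders.

```python
# Minimal deployment sketch, assuming PyBullet as the physics simulator.
# "scene_mesh.obj" is a placeholder for the reconstructed, textured scene.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # use p.GUI instead for an interactive window
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

# Load the scene as a static (mass 0) concave trimesh so a robot can
# collide with floors, walls, and the isolated objects.
collision = p.createCollisionShape(
    p.GEOM_MESH, fileName="scene_mesh.obj",
    flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
visual = p.createVisualShape(p.GEOM_MESH, fileName="scene_mesh.obj")
scene = p.createMultiBody(baseMass=0,
                          baseCollisionShapeIndex=collision,
                          baseVisualShapeIndex=visual)

# Drop in a stock robot model to exercise navigation and interaction.
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

for _ in range(240):  # one simulated second at the default 240 Hz step
    p.stepSimulation()
```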