In this thesis, we develop an automated indoor model reconstruction pipeline that recovers the three-dimensional layout of an indoor scene from point clouds generated by combining color images from a color camera with depth images from a depth sensor. For depth sensing, we adopt Time-of-Flight (ToF) technology to acquire distance measurements, from which the 3D coordinates of the sensed points are computed to represent the positions of indoor objects; noise points caused by ToF measurement error are filtered out based on the local density distribution of the point cloud. Then, by detecting feature points in the color images across a sequence of scans, we estimate the camera motion between consecutive frames and thereby reconstruct the point cloud of the entire indoor scene. For automated modeling, we apply boundary selection, region growing, and density-based classification of normal vectors to extract the point cloud belonging to the indoor walls, fit plane equations with the least squares method, and build a model of the surrounding walls. Finally, semantic segmentation by deep learning labels the indoor point cloud with object names; the points are classified by name and exported in a graphics file format, completing the automated modeling pipeline.

Keywords: Depth sensing, Spatial point cloud, Point cloud reconstruction, Object classification, Semantic segmentation, Automated modeling
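The step of computing 3D point coordinates from ToF depth measurements can be illustrated with a standard pinhole back-projection. This is a minimal sketch, not the thesis's actual implementation; the intrinsic parameters `fx`, `fy`, `cx`, `cy` are illustrative and would come from camera calibration in practice:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D camera coordinates
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth reading
```

Each valid depth pixel thus yields one 3D point; applying this to every frame of the scan produces the per-frame point clouds that are later merged via the estimated camera motion.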
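Filtering ToF noise by point cloud density can be realized in several ways; one common variant (assumed here for illustration, not necessarily the thesis's exact criterion) drops points whose mean distance to their k nearest neighbors is anomalously large:

```python
import numpy as np

def remove_sparse_points(points, k=8, std_ratio=1.0):
    """Density-based outlier removal: a point whose mean distance to its
    k nearest neighbors exceeds (global mean + std_ratio * std) is dropped."""
    # Brute-force pairwise distances; a KD-tree would be used for large clouds.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1:k + 1]  # skip self-distance at index 0
    mean_knn = d_sorted.mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

Isolated points produced by ToF multipath or range ambiguity have few close neighbors, so their mean k-NN distance lands far above the threshold and they are discarded.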
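The least-squares fitting of a plane equation to an extracted wall segment can be sketched as follows. This version minimizes the sum of squared point-to-plane distances via SVD of the centered points, one standard formulation of the least squares plane fit named in the abstract:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal n and a point c on the
    plane, minimizing the sum of squared point-to-plane distances via SVD."""
    c = points.mean(axis=0)                    # centroid lies on the best-fit plane
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                                 # right singular vector of the
    return n, c                                # smallest singular value = normal
```

The plane is then n . (p - c) = 0; in this form each extracted wall segment yields one plane equation, and intersecting adjacent planes gives the wall-corner edges of the reconstructed room model.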