Three-dimensional (3D) modeling of indoor environments has been extensively developed in recent years. In photogrammetry, one of the traditional mainstream solutions for indoor mapping and modeling is to create a 3D point cloud model from multiple images. However, the major drawback of image-based approaches is the lack of points extracted in featureless areas. RGB-D sensors, which capture both RGB images and per-pixel depth information, have recently become a popular indoor mapping tool in the field of computer vision, but they suffer from low image resolution and a limited measurement range. Image-based and RGB-D-based indoor mapping therefore each have their own strengths and limitations. This research aims to develop an indoor mapping procedure that combines the two types of sensors so that each compensates for the shortcomings of the other, producing a uniformly distributed point cloud of indoor environments.
This study uses a Microsoft Kinect as the RGB-D sensor in the experiments. The proposed procedure consists of three main steps: (1) the Structure from Motion (SfM) method is used to reconstruct the camera positions and parameters from multiple color images, where high-resolution images captured by a DSLR camera provide more accurate ray intersection geometry; (2) a software package based on the Clustering Views for Multi-view Stereo (CMVS) method is used to construct a dense matching point cloud; (3) the feature points extracted during the SfM reconstruction are filtered with the Random Sample Consensus (RANSAC) method, and each Kinect point cloud is then transferred into the coordinate system of the dense matching point cloud via a 3D similarity transformation. Experimental results demonstrate that the proposed data processing procedure can generate dense and fully colored point clouds of indoor environments, even in featureless areas. In addition, the feature point selection approach improves the accuracy of the estimated transformation parameters and stabilizes the quality of the final integrated point cloud model.
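Step (3) amounts to estimating a seven-parameter similarity transformation (scale, rotation, translation) from corresponding feature points, with RANSAC used to reject mismatched correspondences before the final fit. The Python sketch below only illustrates this idea and is not the implementation used in the study; the closed-form Horn/Umeyama-style estimator, the function names, and the 0.05 m inlier threshold are assumptions chosen for demonstration.

    # Minimal illustrative sketch (not the thesis software): RANSAC-filtered
    # 3D similarity (Helmert) transformation between matched 3D feature points,
    # e.g. Kinect-derived coordinates vs. SfM/CMVS dense-model coordinates.
    import numpy as np

    def estimate_similarity(src, dst):
        """Closed-form least-squares scale s, rotation R, translation t
        mapping src -> dst (Horn/Umeyama-style). src, dst: (N, 3), N >= 3."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d
        U, d, Vt = np.linalg.svd(dst_c.T @ src_c)          # cross-covariance SVD
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
        R = U @ S @ Vt
        s = np.trace(np.diag(d) @ S) / np.sum(src_c ** 2)
        t = mu_d - s * R @ mu_s
        return s, R, t

    def ransac_similarity(src, dst, n_iter=1000, threshold=0.05, seed=None):
        """Select inlier correspondences with RANSAC, then refit on all inliers.
        threshold is a residual distance in model units (metres assumed here)."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(n_iter):
            idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
            s, R, t = estimate_similarity(src[idx], dst[idx])
            residuals = np.linalg.norm(s * src @ R.T + t - dst, axis=1)
            inliers = residuals < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers.sum() < 3:
            raise RuntimeError("RANSAC found too few inlier correspondences")
        s, R, t = estimate_similarity(src[best_inliers], dst[best_inliers])
        return s, R, t, best_inliers

    # Usage sketch: register one Kinect point cloud into the dense-model frame.
    # kinect_pts, dense_pts are (N, 3) arrays of matched feature coordinates.
    #   s, R, t, inliers = ransac_similarity(kinect_pts, dense_pts)
    #   registered_cloud = s * kinect_cloud @ R.T + t

Filtering the correspondences before the final fit is what the abstract refers to as the feature point selection step: gross mismatches between SfM features and Kinect depth samples would otherwise bias the seven estimated parameters.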