Master's/Doctoral Thesis 102322094: Detailed Record




Author: 吳姿璇 (Tzy-shyuan Wu)
Department: Civil Engineering
Thesis Title: Integration of RGB-D Sensor and Digital Single-Lens Reflex Camera for Indoor Point Cloud Model Generation
Related theses
★ A Study on Generating and Mapping Real-Scene Texture Images onto 3D Building Models
★ Texture-Assisted Analysis of High-Resolution Satellite Images for Detecting the Distribution of Invasive Plants
★ Detecting Alien Plants with Hyperspectral Images: Leucaena in the Hengchun Area as an Example
★ Texture Mapping of 3D Building Models from Video Imagery
★ Block-Based Level-of-Detail Terrain Visualization
★ Three-Dimensional Computation of Texture Features in Hyperspectral Image Cubes
★ Progressive Multi-Resolution Rendering for Large-Scale Terrain Visualization
★ Block-Based LOD Mesh Refinement for Large-Scale Terrain Visualization
★ Construction of Multi-Level-of-Detail 3D Building Models
★ Three-Dimensional Texture Computation of Hyperspectral Image Cubes in Feature Space
★ Image Inpainting for Occlusion Removal from Wall Texture Images
★ Landslide Identification by Data Mining with Remote Sensing Images and GIS Data: the Shihmen Reservoir Watershed as an Example
★ Improving the Level of Detail of 3D Building Models with Close-Range Images
★ Reconstructing 3D Models of Complex Objects from Terrestrial and Airborne LiDAR Point Clouds
★ Combining High-Detail Building Models with Ant Colony Optimization for Indoor Optimal Path Selection
★ Feature Extraction and Land-Cover Classification from Airborne Full-Waveform LiDAR Using Second-Derivative Methods
Access rights: the author has agreed to immediate open access of the electronic full text.

Abstract (Chinese): Indoor modeling techniques have advanced rapidly in recent years. In photogrammetry, the traditional mainstream approach extracts and matches features across multiple high-resolution images to reconstruct 3D point clouds and models of a scene. Image-based 3D reconstruction, however, cannot recover sufficient spatial point data in featureless indoor areas. Because an RGB-D sensor captures a color image together with per-pixel depth, point data remain available even in featureless regions, and such sensors have therefore become an emerging indoor mapping tool in computer vision. Their drawbacks are a limited measurement range and low image resolution. Whether a scene is surveyed from images or directly with an RGB-D sensor, each approach has its own strengths and weaknesses. This study therefore develops a mapping system and workflow that integrates an RGB-D sensor with a digital single-lens reflex (DSLR) camera, so that the two devices complement each other and a complete indoor 3D point cloud model can be built.

This study uses the Microsoft Kinect as the RGB-D test device. The procedure consists of three main steps: (1) Structure from Motion (SfM) is used to recover the positions and parameters of the cameras from the captured color images, with the high-resolution DSLR images improving the accuracy of the ray intersection solution; (2) a software package based on Clustering Views for Multi-view Stereo (CMVS) reconstructs a dense matching point cloud of the scene; (3) the feature point coordinates extracted during the SfM solution are screened with Random Sample Consensus (RANSAC), and each Kinect point cloud is merged into the coordinate system of the dense matching model through a 3D similarity transformation. The experimental results show that the indoor point cloud models built with the proposed integrated workflow retain complete point information even in featureless areas, and that the RANSAC screening step effectively improves the accuracy of the transformation parameters and stabilizes the quality of the final integrated point cloud model.
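To make step (3) concrete, the following is a minimal sketch, not the author's implementation, of how a 3D similarity transformation (scale, rotation, translation) between matched Kinect and SfM feature points can be estimated in closed form with the SVD-based Umeyama solution. It assumes NumPy; the function name and array shapes are illustrative.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Estimate s, R, t so that dst_i ≈ s * R @ src_i + t (closed-form Umeyama solution).

    src, dst: (N, 3) arrays of corresponding 3D points (N >= 3, not collinear).
    Here src could be Kinect-frame feature points and dst the matching SfM model points.
    """
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    var_src = (src_c ** 2).sum() / n          # mean squared deviation of the source set

    # Cross-covariance between the centred point sets and its SVD.
    cov = dst_c.T @ src_c / n
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against returning a reflection

    R = U @ S @ Vt                            # rotation
    s = np.trace(np.diag(D) @ S) / var_src    # scale
    t = mu_dst - s * R @ mu_src               # translation
    return s, R, t

# Aligning a Kinect point cloud with the dense matching model (hypothetical arrays):
# aligned = s * (R @ kinect_points.T).T + t
```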

Abstract (English): Three-dimensional (3D) modeling of indoor environments has been extensively developed in recent years. In photogrammetry, one of the traditional mainstream solutions for indoor mapping and modeling is to create a 3D point cloud model from multiple images. The major drawback of image-based approaches, however, is the lack of points extracted in featureless areas. RGB-D sensors, which capture both RGB images and per-pixel depth information, have recently become a popular indoor mapping tool in the field of computer vision; their shortcomings are low image resolution and a limited measurement range. Indoor mapping based on images and indoor mapping based on RGB-D data thus each have their own strengths and limitations. This research therefore aims to develop an indoor mapping procedure that combines the two devices so that each compensates for the shortcomings of the other, producing a uniformly distributed point cloud of indoor environments.

This study uses the Microsoft Kinect as the RGB-D sensor in its experiments. The proposed procedure has three main steps: (1) the Structure from Motion (SfM) method is used to reconstruct the camera positions and parameters from multiple color images, where high-resolution images captured by a DSLR camera provide a more accurate ray intersection solution; (2) software based on the Clustering Views for Multi-view Stereo (CMVS) method constructs a dense matching point cloud; (3) the feature points extracted during the SfM reconstruction are screened with the Random Sample Consensus (RANSAC) method, and the Kinect point clouds are then transformed into the coordinate system of the dense matching point cloud via a 3D similarity transformation. Experimental results demonstrate that the proposed data processing procedure can generate dense, fully colored point clouds of indoor environments even in featureless places. In addition, the feature point selection approach improves the accuracy of the estimated transformation parameters and ensures the quality of the final point cloud model.
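The RANSAC screening in step (3) can be read as a simple hypothesize-and-verify loop around the similarity estimation sketched above. The sample size, iteration count, and inlier threshold below are illustrative assumptions, not the parameters used in the thesis.

```python
import numpy as np

def ransac_similarity(src, dst, n_iters=1000, threshold=0.05, seed=0):
    """Select inlier correspondences before the final similarity fit.

    src, dst: (N, 3) arrays of putative corresponding points (e.g. Kinect vs. SfM frame).
    threshold: residual distance (in model units) below which a pair counts as an inlier.
    Returns a boolean inlier mask of the best model found.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_mask, best_count = None, 0
    for _ in range(n_iters):
        # Four correspondences comfortably over-determine the 7-parameter similarity model.
        sample = rng.choice(n, size=4, replace=False)
        s, R, t = estimate_similarity_transform(src[sample], dst[sample])
        residuals = np.linalg.norm(s * (R @ src.T).T + t - dst, axis=1)
        mask = residuals < threshold
        if mask.sum() > best_count:
            best_mask, best_count = mask, int(mask.sum())
    return best_mask

# Hypothetical usage: keep only consistent correspondences, then re-estimate on the inliers.
# inliers = ransac_similarity(kinect_pts, model_pts)
# s, R, t = estimate_similarity_transform(kinect_pts[inliers], model_pts[inliers])
```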

Keywords (Chinese)
★ Point cloud model
★ Kinect
★ Structure from Motion
★ Random Sample Consensus
★ 3D similarity transformation
Keywords (English)
★ Point cloud model
★ Kinect
★ Structure from Motion
★ RANSAC
★ 3D similarity transformation
Table of Contents
Abstract (Chinese) I
Abstract (English) III
Acknowledgements V
Table of Contents VI
List of Figures IX
List of Tables XIII
Chapter 1  Introduction 1
1-1 Background 1
1-2 Motivation and Objectives 3
1-3 Thesis Organization 5
Chapter 2  Literature Review 6
2-1 Image-Based 3D Modeling 7
2-1-1 Feature Point Extraction and Matching 7
2-1-2 Reconstructing 3D Structure and Camera Geometry 10
2-1-3 3D Point Cloud Reconstruction: Multi-View Stereo 12
2-2 3D Data Acquisition and Indoor Modeling 16
2-2-1 RGB-D Sensors 16
2-2-2 RGB-D Indoor Modeling 21
Chapter 3  Methodology 26
3-1 Overview of the Methodology 26
3-2 Data Acquisition and Preprocessing 28
3-2-1 Microsoft Kinect 28
3-2-2 Digital Single-Lens Reflex Camera 31
3-3 Image-Based Reconstruction: VisualSFM 32
3-4 Point Cloud Integration 37
3-4-1 3D Similarity Transformation and Workflow 38
3-4-2 Feature Point Selection 41
3-5 Point Cloud Model Visualization 44
Chapter 4  Experimental Results and Analysis 45
4-1 Experiment Overview 45
4-1-1 Experimental Environment 45
4-1-2 Experimental Setup 47
4-2 Quality Assessment of Kinect Data 53
4-2-1 Depth Stability Assessment 53
4-2-2 Plane Fitting Assessment 57
4-3 Experimental Results 61
4-3-1 Point Cloud Model Results 61
4-3-2 Point Cloud Accuracy Assessment 72
4-3-3 Results Analysis 79
Chapter 5  Conclusions and Suggestions 90
References 93

References

洪祥恩, 2011. Reconstructing 3D models of complex buildings from terrestrial and airborne LiDAR point clouds, Master's thesis, Department of Civil Engineering, National Central University. (in Chinese)

孫敏, 2007. Multiple-view geometry and traditional photogrammetric theory, Acta Scientiarum Naturalium Universitatis Pekinensis, vol. 43, no. 4, pp. 453-459. (in Chinese)

陳思翰, 2011. A study on 3D model reconstruction from uncalibrated images and its positioning accuracy, Master's thesis, Department of Real Estate and Built Environment, National Taipei University. (in Chinese)

張桓 and 蔡富安, 2014. Vanishing point detection and 3D building model reconstruction from single-view images, Journal of Photogrammetry and Remote Sensing, vol. 18, no. 4, pp. 217-233. (in Chinese)

張萌, 2013. Horizontal plane detection from 3D building point cloud data, Master's thesis, Department of Communication and Information Systems, Xidian University. (in Chinese)

黃金聰 and 陳思翰, 2013. Accuracy assessment of point clouds generated from multiple images, Journal of Taiwan Land Research, vol. 16, no. 1, pp. 81-101. (in Chinese)

趙煇, 2006. Lecture notes on SIFT feature matching, School of Information Science and Engineering, Shandong University. (in Chinese)

Bouguet, J. Y., 2013. Camera Calibration Toolbox for Matlab. Retrieved June 30, 2015, from http://www.vision.caltech.edu/bouguetj/calib_doc/

Besl, P. J. and McKay, N. D., 1992. A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256.

Chow, J. and Lichti, D., 2013. Photogrammetric bundle adjustment with self-calibration of the PrimeSense 3D camera technology: Microsoft Kinect, IEEE Access, vol. 1, pp. 465-474.

Fischler, M. A. and Bolles, R. C., 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, vol. 24, pp. 381-395.

Furukawa, Y. and Ponce, J., 2010. Accurate, dense, and robust multi-view stereopsis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1362-1376.

Han, J., Shao, L., Xu, D., and Shotton, J., 2013. Enhanced computer vision with Microsoft Kinect sensor: a review, IEEE Transactions on Cybernetics, pp. 1318-1334.

Henry, P., Krainin, M., Herbst, E., Ren, X., and Fox, D., 2012. RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments, The International Journal of Robotics Research, vol. 31, no. 5, pp. 647-663.

Khoshelham, K. and Elberink, S., 2012. Accuracy and resolution of Kinect depth data for indoor mapping applications, Sensors, vol. 12, no. 1, pp. 1437-1454.

Lowe, D. G., 1999. Object recognition from local scale-invariant features, Proceedings of the International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157.

Mankoff, K. D. and Russo, T. A., 2013. The Kinect: a low-cost, high-resolution, short-range 3D camera, Earth Surface Processes and Landforms, vol. 38, no. 9, pp. 926-936.

Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., Kohli, P., Shotton, J., Hodges, S., and Fitzgibbon, A., 2011. KinectFusion: real-time dense surface mapping and tracking, Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 127-136.

Wolf, P. R. and Dewitt, B. A., 2000. Elements of Photogrammetry with Applications in GIS, 3rd edition, McGraw-Hill.

Wu, C., 2011. VisualSFM: A Visual Structure from Motion System. Retrieved August 20, 2014, from http://ccwu.me/vsfm/

Wu, C., Agarwal, S., Curless, B., and Seitz, S. M., 2011. Multicore bundle adjustment, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3057-3064.

Wu, C., 2013. Towards linear-time incremental structure from motion, Proceedings of the 2013 International Conference on 3D Vision (3DV), pp. 127-134.

Wasmeier, P., 2014. Geodetic Transformation Toolbox. Retrieved June 24, 2014, from http://www.mathworks.com/matlabcentral/fileexchange/9696-geodetic-transformations-toolbox

Zhou, K., 2010. Structure & Motion, lecture notes for Structure in Pattern Recognition, Vienna University of Technology, Faculty of Informatics, Institute of Computer Graphics and Algorithms, Pattern Recognition and Image Processing Group.

Advisor: 蔡富安    Date of Approval: 2015-08-26