

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/68046


    Title: Integration of RGB-D Sensor and Digital Single-Lens Reflex Camera for Indoor Point Cloud Model Generation
    Author: Wu, Tzy-shyuan (吳姿璇)
    Contributor: Department of Civil Engineering
    Keywords: Point cloud model; Kinect; Structure from Motion; RANSAC; 3D similarity transformation
    Date: 2015-08-26
    Upload time: 2015-09-23 10:15:29 (UTC+8)
    Publisher: National Central University
    Abstract: Three-dimensional (3D) modeling of indoor environments has developed rapidly in recent years. In photogrammetry, the traditional mainstream approach to indoor mapping is to extract and match features across multiple high-resolution images and to reconstruct a 3D point cloud and model of the space. The major drawback of such image-based reconstruction is that it cannot recover enough points in featureless indoor areas. RGB-D sensors, which capture a color image together with per-pixel depth, still deliver point cloud data in featureless regions and have therefore become an emerging indoor mapping tool in computer vision; their drawbacks are a limited measurement range and low image resolution. Since image-based and RGB-D-based indoor mapping each have their own strengths and limitations, this research develops a mapping system and workflow that integrates an RGB-D sensor with a digital single-lens reflex (DSLR) camera, letting the two devices compensate for each other's shortcomings to build a complete 3D point cloud model of indoor environments.

    This study uses the Microsoft Kinect as the RGB-D sensor in the experiments. The proposed procedure has three main steps: (1) Structure from Motion (SfM) reconstructs the camera positions and parameters from the captured color images, where the high-resolution DSLR images improve the accuracy of the ray-intersection solution; (2) a software package based on Clustering Views for Multi-view Stereo (CMVS) builds a dense-matching point cloud of the scene; (3) the feature points extracted during the SfM solution are filtered with Random Sample Consensus (RANSAC), and each Kinect point cloud is transformed into the coordinate system of the dense-matching point cloud through a 3D similarity transformation. Experimental results show that the integrated procedure produces dense, fully colored indoor point clouds even in featureless places, and that the RANSAC-based feature selection improves the accuracy of the transformation parameters and stabilizes the quality of the final integrated point cloud model.
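    The registration in step (3) can be illustrated with a short sketch: putative correspondences between a Kinect point cloud and the SfM/CMVS dense cloud are filtered with RANSAC, and the surviving inliers are used to solve the seven-parameter 3D similarity (Helmert) transformation. The sketch below is an illustration under stated assumptions, not the thesis code: it uses the closed-form Umeyama absolute-orientation solution in place of whatever least-squares adjustment the author applied, the correspondence arrays and the synthetic test data are hypothetical, and the 5 cm inlier threshold is an assumed value.

"""
Sketch: RANSAC-filtered 3D similarity (Helmert) transformation, assuming
Kinect-to-SfM feature correspondences are already available as two Nx3 arrays.
Array names, threshold, and the synthetic test data are illustrative only.
"""
import numpy as np


def similarity_transform(src, dst):
    """Closed-form Umeyama solution for scale s, rotation R, translation t
    minimizing ||dst - (s * R * src + t)||^2."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)              # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)       # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t


def ransac_similarity(src, dst, n_iter=1000, threshold=0.05, seed=None):
    """Keep the transform supported by the most correspondences, then refit it
    on the inlier set; `threshold` is the residual in target units (metres)."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        try:
            s, R, t = similarity_transform(src[idx], dst[idx])
        except np.linalg.LinAlgError:                        # degenerate sample
            continue
        residuals = np.linalg.norm(dst - (s * src @ R.T + t), axis=1)
        inliers = residuals < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return similarity_transform(src[best_inliers], dst[best_inliers]), best_inliers


if __name__ == "__main__":
    # Synthetic check: transform random "Kinect" points with a known s, R, t,
    # corrupt a few correspondences, and verify the RANSAC estimate recovers them.
    rng = np.random.default_rng(0)
    kinect_pts = rng.uniform(-2, 2, size=(200, 3))
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    s_true, t_true = 1.8, np.array([0.5, -1.0, 2.0])
    sfm_pts = s_true * kinect_pts @ R_true.T + t_true
    sfm_pts[:20] += rng.uniform(1.0, 3.0, size=(20, 3))      # simulated mismatches

    (s, R, t), inliers = ransac_similarity(kinect_pts, sfm_pts, threshold=0.05)
    print("scale", round(s, 3), "inliers", int(inliers.sum()), "of", len(kinect_pts))
    registered = s * kinect_pts @ R.T + t                    # Kinect frame in SfM coordinates

    In the actual workflow, each Kinect frame would contribute its own correspondence set against the dense-matching cloud and therefore receive its own scale, rotation, and translation before all frames are merged into one coordinate system.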
    Appears in Collections: [Graduate Institute of Civil Engineering] Doctoral and Master's Theses

    Files in This Item:

    File: index.html | Size: 0 KB | Format: HTML

