
    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/68046

    Title: 整合RGB-D感測器與單眼數位相機的室內環境點雲模型重建;Integration of RGB-D Sensor and Digital Single-Lens Reflex Camera for Indoor Point Cloud Model Generation
    Authors: 吳姿璇;Wu, Tzy-shyuan
    Contributors: 土木工程學系 (Department of Civil Engineering)
    Keywords: 點雲模型;Kinect;運動探知結構;隨機抽樣一致;三維相似轉換;Point cloud model;Kinect;Structure from Motion;RANSAC;3D similarity transformation
    Date: 2015-08-26
    Issue Date: 2015-09-23 10:15:29 (UTC+8)
    Publisher: 國立中央大學 (National Central University)
    Abstract: Three-dimensional (3D) modeling of indoor environments has developed rapidly in recent years. In photogrammetry, the traditional mainstream approach to indoor mapping and modeling is to build a 3D point cloud by extracting and matching features across multiple high-resolution images. The major drawback of such image-based reconstruction, however, is that too few points can be extracted in featureless indoor areas. RGB-D sensors, which capture an RGB image together with per-pixel depth, can still return point cloud data in those areas and have therefore become a popular indoor mapping tool in computer vision; their shortcomings are low image resolution and a limited sensing range. Since image-based and RGB-D approaches each have their own strengths and limitations, this research develops an indoor mapping system and workflow that integrates an RGB-D sensor with a digital single-lens reflex (DSLR) camera, combining the advantages of both devices to build a complete indoor 3D point cloud model.
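As a concrete illustration of the per-pixel depth mentioned above, the sketch below back-projects a depth map into a camera-frame point cloud with the standard pinhole model. This is a generic illustration, not code from the thesis; the function name and the intrinsics in the usage comment are illustrative assumptions.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to 3D camera-frame points using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth return

# Illustrative intrinsics only (roughly Kinect-v1-like, not values from the thesis):
# cloud = depth_to_points(depth_map, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Pixels with zero depth (no sensor return) are discarded, which is why RGB-D devices still yield points in featureless areas: depth comes from active sensing rather than from image feature matching.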

    This study uses the Microsoft Kinect as the RGB-D sensor in the experiments. The proposed procedure has three main steps: (1) Structure from Motion (SfM) reconstructs the camera positions and parameters from the color images; the high-resolution images captured by the DSLR camera improve the accuracy of the ray-intersection solution. (2) A software package based on Clustering Views for Multi-view Stereo (CMVS) builds a dense matching point cloud of the scene. (3) The feature points extracted during the SfM reconstruction are filtered with Random Sample Consensus (RANSAC), and each Kinect point cloud is then transferred into the coordinate system of the dense matching point cloud via a 3D similarity transformation. Experimental results demonstrate that the proposed procedure generates dense, fully colored indoor point clouds even in featureless places, and that the RANSAC-based feature selection improves the accuracy of the estimated transformation parameters and stabilizes the quality of the final integrated point cloud model.
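Step (3) — RANSAC filtering of correspondences followed by a 3D similarity transformation — can be sketched as below. This is a minimal generic version, not the thesis' implementation: it uses Umeyama's closed-form estimator for scale, rotation, and translation inside a simple RANSAC loop, and the iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form least-squares fit of dst ≈ s * R @ src + t (Umeyama, 1991).
    src, dst: (n, 3) arrays of corresponding 3D points."""
    n = len(src)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n                      # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                     # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / ((src_c ** 2).sum() / n)
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_similarity(src, dst, iters=200, thresh=0.05, seed=0):
    """Fit models on minimal 3-point samples, keep the largest consensus
    set of correspondences, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        s, R, t = umeyama_similarity(src[idx], dst[idx])
        resid = np.linalg.norm(src @ (s * R).T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return umeyama_similarity(src[best], dst[best]), best
```

Pre-filtering correspondences this way is what stabilizes the transformation: a single mismatched feature point would otherwise bias the least-squares scale and rotation, whereas the consensus refit uses only mutually consistent correspondences.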
    Appears in Collections: [土木工程研究所 Graduate Institute of Civil Engineering] 博碩士論文 (Master's and Doctoral Theses)

    All items in NCUIR are protected by copyright, with all rights reserved.
