NCU Institutional Repository (中大機構典藏): Item 987654321/75940


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/75940


    Title: Reconstruction of Three-Dimensional Objects Using a Network of RGB-D Sensors (使用多個RGB-D攝影機實現三維物件重建)
    Author: Kumara, W.G.C.W. (華得尼)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: 3D; Kinect; Reconstruction; ICP; Noise removal; 3D inpainting; 3D video
    Date: 2018-01-29
    Upload date: 2018-04-13 11:23:09 (UTC+8)
    Publisher: National Central University
    Abstract: 3D model reconstruction techniques using RGB-D information have attracted great attention from researchers around the world in recent decades. An RGB-D sensor consists of a color camera, an infrared (IR) emitter, and an IR receiver, so it can capture both a color image and a depth image of a scene. The depth image gives the distance from the camera to each point in the scene. Because they provide both color and depth information, RGB-D sensors are widely used in many research fields, such as computer vision, computer graphics, and human-computer interaction.
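
    To make the depth-to-distance relationship concrete, here is a minimal sketch (not taken from the dissertation) of back-projecting a depth image into 3D camera-space points with a pinhole camera model; the intrinsic parameters fx, fy, cx, cy below are hypothetical, roughly Kinect-v2-like values.

        import numpy as np

        def depth_to_points(depth, fx, fy, cx, cy):
            """Back-project a depth image (in meters) into an N x 3 array of camera-space points."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
            z = depth
            x = (u - cx) * z / fx                           # pinhole model: X = (u - cx) * Z / fx
            y = (v - cy) * z / fy
            pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
            return pts[pts[:, 2] > 0]                       # drop pixels with no depth reading

        # Hypothetical 512 x 424 depth map and intrinsics, for illustration only.
        depth = np.random.uniform(0.5, 4.5, (424, 512))
        points = depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)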

    This dissertation presents research findings on calibrating the information captured by a network of RGB-D sensors in order to reconstruct a 3D model of an object. The sensors are interconnected over a wireless network. A network of sensors is used because a single sensor cannot capture the live pose of a person from all viewpoints. The high-bit-rate streams captured by each sensor are first collected at a centralized PC for processing; this can even be extended to a remote PC on the Internet, as sketched below.
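
    The transport details are not spelled out in the abstract; as a rough illustration of the data-collection architecture, the following sketch assumes each sensor process streams length-prefixed frame buffers over TCP to the centralized PC. The host name, port, and frame contents are hypothetical.

        import socket
        import struct

        def send_frame(sock, frame_bytes):
            """Sensor side: prefix each frame with its 4-byte length and send it."""
            sock.sendall(struct.pack("!I", len(frame_bytes)) + frame_bytes)

        def recv_exact(conn, n):
            """Collector side: read exactly n bytes from the connection."""
            buf = b""
            while len(buf) < n:
                chunk = conn.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("sensor disconnected")
                buf += chunk
            return buf

        def recv_frame(conn):
            """Collector side: read one length-prefixed frame."""
            (length,) = struct.unpack("!I", recv_exact(conn, 4))
            return recv_exact(conn, length)

        # Sensor side (hypothetical collector address):
        #   s = socket.create_connection(("collector.local", 9000))
        #   send_frame(s, frame_bytes)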

    Point clouds are then generated from the RGB-D information. A point cloud is a set of scattered 3D points that represents the surface structure and color of the captured object. The point clouds produced by the individual sensors are then aligned with one another to create the 3D model. A modified version of the Iterative Closest Point (ICP) algorithm is introduced for this purpose.
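
    The modified ICP variant itself is described in the dissertation; for orientation only, the sketch below shows a generic point-to-point ICP loop (nearest-neighbor correspondences followed by an SVD-based rigid fit), which such modifications typically build on. Function names and parameters are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch / SVD)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:              # guard against a reflection
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = dst_c - R @ src_c
            return R, t

        def icp(src, dst, iterations=30):
            """Generic point-to-point ICP: iteratively align src (N x 3) to dst (M x 3)."""
            tree = cKDTree(dst)
            aligned = src.copy()
            for _ in range(iterations):
                _, idx = tree.query(aligned)      # nearest-neighbor correspondences
                R, t = best_rigid_transform(aligned, dst[idx])
                aligned = aligned @ R.T + t
            return aligned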

    Captured point clouds may contain noise for several reasons, such as inherent camera distortion, interference from the infrared fields of other sensors, and inaccurate infrared reflection caused by object surface properties. Two algorithms are introduced to remove such noise from the point clouds: adaptive distance-based noise removal and adaptive density-based noise removal. Adaptive distance-based noise removal is performed before the point clouds are aligned, and adaptive density-based noise removal is performed after the alignment.
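
    The adaptive criteria used by the two filters are specific to this work; the sketch below shows only a generic density-based outlier filter of the same flavor, which discards points whose mean distance to their k nearest neighbors is unusually large. The parameters k and std_ratio are hypothetical.

        import numpy as np
        from scipy.spatial import cKDTree

        def density_based_filter(points, k=8, std_ratio=2.0):
            """Keep points whose mean distance to their k nearest neighbors is below
            the global mean plus std_ratio standard deviations."""
            tree = cKDTree(points)
            dists, _ = tree.query(points, k=k + 1)   # column 0 is the point itself (distance 0)
            mean_d = dists[:, 1:].mean(axis=1)
            threshold = mean_d.mean() + std_ratio * mean_d.std()
            return points[mean_d < threshold]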

    On most RGB-D sensors, the resolution of the color image is much higher than that of the depth image. Because the point clouds are generated from the depth information, the number of points in a point cloud depends on the resolution of the depth image. A new 3D super-resolution algorithm is introduced to increase the number of points in the point clouds by taking advantage of the higher-resolution color image.
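
    The proposed super-resolution algorithm is not reproduced here; the sketch below only illustrates the underlying idea of bringing the depth map up to the color-image resolution (plain bilinear upsampling in this toy version) so that one colored 3D point can be generated per color pixel. The resolutions are hypothetical Kinect-v2-like values.

        import numpy as np
        from scipy.ndimage import zoom

        def upsample_depth_to_color(depth, color_shape):
            """Bilinearly upsample a low-resolution depth map to the color-image
            resolution so that one 3D point can be generated per color pixel."""
            sy = color_shape[0] / depth.shape[0]
            sx = color_shape[1] / depth.shape[1]
            return zoom(depth, (sy, sx), order=1)    # order=1 -> bilinear interpolation

        # Hypothetical resolutions: 512 x 424 depth, 1920 x 1080 color.
        depth_hr = upsample_depth_to_color(np.random.uniform(0.5, 4.5, (424, 512)), (1080, 1920))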

    The point clouds may contain both small and large holes in the surface. Small holes are first located, and three small-hole-filling mechanisms are introduced. Large holes arise because the camera cannot capture surfaces that face away from it or surfaces hidden behind another object (occlusion). A 3D inpainting algorithm based on 2D inpainting is proposed to fill the large holes in the point clouds. Finally, a surface is reconstructed from the resulting point clouds, clearly representing the captured 3D object.
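
    The actual 3D inpainting algorithm works through 2D inpainting of projected views; as a toy stand-in, the sketch below fills holes (zero-valued pixels) in a 2D depth map by repeatedly averaging valid 4-neighbors, which conveys the flavor of 2D hole filling without reproducing the proposed method.

        import numpy as np

        def fill_small_holes(depth, iterations=50):
            """Toy 2D hole filling: repeatedly replace empty pixels (value 0) with the
            mean of their valid 4-neighbors (edges wrap, acceptable for a sketch)."""
            d = depth.astype(float).copy()
            for _ in range(iterations):
                holes = d == 0
                if not holes.any():
                    break
                shifted = np.stack([np.roll(d, 1, axis=0), np.roll(d, -1, axis=0),
                                    np.roll(d, 1, axis=1), np.roll(d, -1, axis=1)])
                valid = shifted > 0
                counts = valid.sum(axis=0)
                sums = np.where(valid, shifted, 0.0).sum(axis=0)
                fillable = holes & (counts > 0)
                d[fillable] = (sums / np.maximum(counts, 1))[fillable]
            return d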

    Two experiments were performed. In the first, eight Microsoft Kinect v2 sensors were used as the RGB-D sensors, and human busts were captured and reconstructed. In the second, one Intel RealSense SR300 sensor was used as the RGB-D sensor to capture and reconstruct the surface of a type of Taiwanese puppet known as Budaixi (布袋戲). Experimental results demonstrate that the proposed methods generate a better 3D model of the captured object.
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations
