Thesis/Dissertation 101582605: Complete Metadata Record

DC Field / Value / Language
dc.contributor: 資訊工程學系 [zh_TW]
dc.creator: 華得尼 [zh_TW]
dc.creator: W.G.C.W. Kumara [en_US]
dc.date.accessioned: 2018-01-29T07:39:07Z
dc.date.available: 2018-01-29T07:39:07Z
dc.date.issued: 2018
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=101582605
dc.contributor.department: 資訊工程學系 [zh_TW]
dc.description: 國立中央大學 [zh_TW]
dc.description: National Central University [en_US]
dc.description.abstract [zh_TW]:
使用RGB-D信息的三維模型重建技術近幾十年來一直受到世界各地研究人員的高度關注。RGB-D傳感器由彩色攝影機、紅外線(IR)發射器和接收器組成。應用紅外線,RGB-D傳感器可以獲取場景的彩色影像和深度影像;深度影像提供場景中每個點與攝影機間的距離。RGB-D傳感器由於能同時提供顏色和深度信息,廣泛用於計算機視覺、計算機圖形學和人機交互等許多研究領域。

本文提出了對RGB-D傳感器網路中的影像資訊進行校準的研究結果,用以重建物體的三維模型。我們使用了藉由無線網路互連的RGB-D傳感器網路。使用傳感器網路的原因是為了捕捉人的現場姿勢,因為單一傳感器無法捕捉人所有方向的姿勢。每個傳感器捕獲的高位元速率資料流先在集中式PC匯集並進行處理;這甚至可以擴展到網際網路上的遠端電腦。

然後從RGB-D資訊生成點雲。點雲是一組分散的三維點,代表捕捉物體的表面結構和顏色。多個傳感器產生的多個點雲彼此對齊以創建三維模型;為此引入了迭代最近點(ICP)演算法的修改版本。

獲取的點雲可能包含許多雜訊,原因包括攝影機本身的失真、其他紅外線發射器造成的干擾,以及物體表面特性造成的紅外線反射不準確。在此我們使用兩種演算法來消除點雲中的雜訊:基於距離以及基於密度的自適應去雜訊演算法。基於距離的去雜訊演算法在點雲校準之前執行,基於密度的自適應演算法則在校準之後執行。

在大多數RGB-D攝影機中,彩色影像的解析度遠高於深度影像。由於點雲是由深度影像生成的,點雲中點的數量取決於深度影像的解析度。本文引入一種新的三維超解析度演算法,利用高解析度彩色影像的優點來增加點雲中點的數量。

點雲的表面上可能包含大小不同的破孔。小破孔先被定位並加以填補;而由於攝影機無法捕捉到背向攝影機的表面,或被其他物體遮蔽(occlusion)的表面,點雲生成的模型會產生較大的破孔。因此提出了一種基於二維修補的三維修復演算法來填補點雲中的大孔。最後再以清晰呈現物體的點雲重建其表面。

我們進行了兩種實驗。第一種實驗使用八台Kinect v2攝影機作為RGB-D攝影機,進行上半身人物模型的獲取和重建。第二種實驗使用一台Intel RealSense SR300攝影機作為RGB-D攝影機,捕捉並重建在台灣被稱作布袋戲的戲偶表面。實驗結果表明所提出的方法能夠產生更好的三維模型。
dc.description.abstract [en_US]:
3D model reconstruction techniques using RGB-D information have been gaining great attention from researchers around the world in recent decades. An RGB-D sensor consists of a color camera and an infrared (IR) emitter and receiver. Using the IR pair, an RGB-D sensor can capture both a color image and a depth image of a scene; the depth image provides the distance from the camera to each point in the scene. RGB-D sensors are widely used in many research fields, such as computer vision, computer graphics, and human-computer interaction, thanks to their ability to provide both color and depth information.

This dissertation presents research findings on calibrating information captured from a network of RGB-D sensors in order to reconstruct a 3D model of an object. We used a network of RGB-D sensors interconnected over a wireless network. The reason for using a network of sensors was to capture the live gestures of a human, since a single sensor cannot capture gestures from all views around the person. The high-bit-rate streams captured by each sensor are first collected at a centralized PC for processing; this can even be extended to a remote PC on the Internet.

Point clouds are then generated from the RGB-D information. A point cloud is a set of scattered 3D points that represents the surface structure and color of the captured object. The multiple point clouds generated by the sensors are then aligned with each other to create a 3D model. A modified version of the Iterative Closest Point (ICP) algorithm is introduced for this purpose.

Captured point clouds may contain noise for several reasons, such as inherent camera distortion, interference from the infrared fields of other sensors, and inaccurate infrared reflection due to object surface properties. Two noise removal algorithms are introduced to remove such noise from the point clouds: an adaptive distance-based and an adaptive density-based noise removal algorithm. Adaptive distance-based noise removal is performed before the alignment of the point clouds, and adaptive density-based noise removal is performed after the alignment.

The resolution of the color image is much higher than that of the depth image on most RGB-D sensors. Since point clouds are generated from the depth information, the number of points in a point cloud depends on the resolution of the depth image. A new 3D super-resolution algorithm is introduced to increase the number of points in the point clouds by exploiting the higher-resolution color image.

Point clouds may also contain small and large holes in the surface. Small holes are first located, and three small-hole filling mechanisms are introduced. Large holes arise because the camera cannot capture surfaces that do not face it, or surfaces hidden behind another object (occlusion). A 3D inpainting algorithm based on 2D inpainting is proposed to fill the large holes in the point clouds. Finally, a surface is reconstructed from the cleaned point clouds, clearly representing the captured 3D object.

Two experiments were performed. In the first, eight Microsoft Kinect v2 sensors were used as RGB-D sensors, and human busts were captured and reconstructed. In the second, one Intel RealSense SR300 sensor was used as the RGB-D sensor to capture and reconstruct the surface of a type of Taiwanese puppet called Budaixi. Experimental results demonstrate that the proposed methods generate a better 3D model of the captured object.
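For illustration only (not part of the metadata record): the classic ICP alignment that the abstract's modified algorithm builds on can be sketched as below. This is a minimal NumPy version under simplifying assumptions — the two clouds overlap fully, brute-force nearest-neighbour search stands in for a k-d tree, and the function names `best_fit_transform` and `icp` are illustrative, not the dissertation's implementation.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iterations=20):
    """Classic ICP: alternate nearest-neighbour matching and rigid alignment."""
    P = src.copy()
    for _ in range(iterations):
        # Brute-force nearest neighbours; real systems use a k-d tree instead.
        dists = np.linalg.norm(P[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        R, t = best_fit_transform(P, matched)
        P = P @ R.T + t                       # apply the incremental transform
    return P

# Demo: recover a small rigid misalignment between two synthetic clouds.
np.random.seed(1)
dst = np.random.rand(60, 3)                   # "reference" cloud
th = np.radians(2.0)                          # 2-degree rotation about z
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
src = dst @ Rz.T + np.array([0.01, -0.01, 0.02])
aligned = icp(src, dst)
```

In the multi-sensor setting described above, one cloud per sensor would be aligned pairwise into a common coordinate frame; the dissertation's modification of ICP (and its handling of noise before alignment) is not reproduced here.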
dc.subject: 3D [zh_TW]
dc.subject: Reconstruction [zh_TW]
dc.subject: ICP [zh_TW]
dc.subject: Noise removal [zh_TW]
dc.subject: 3D inpainting [zh_TW]
dc.subject: 3D video [zh_TW]
dc.subject: 3D [en_US]
dc.subject: Kinect [en_US]
dc.subject: Reconstruction [en_US]
dc.subject: ICP [en_US]
dc.subject: Noise removal [en_US]
dc.subject: 3D inpainting [en_US]
dc.subject: 3D video [en_US]
dc.title: 使用多個RGB-D攝影機實現三維物件重建 [zh_TW]
dc.language.iso: zh-TW [zh-TW]
dc.title: Reconstruction of Three-Dimensional Objects Using A Network of RGB-D Sensors [en_US]
dc.type: 博碩士論文 [zh_TW]
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
