Theses and Dissertations Detailed Record: 101582605




Name  W.G.C.W. Kumara (華得尼)    Department  Department of Computer Science and Information Engineering
Thesis Title  Reconstruction of Three-Dimensional Objects Using a Network of RGB-D Sensors
(Chinese title: 使用多個RGB-D攝影機實現三維物件重建)
Related Theses
★ A Grouping Mechanism Based on Social Relationships in edX Online Discussion Forums
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ A Kinect-Based Assessment System for Smart Classrooms
★ An Intelligent Metropolitan Route Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analysis of Key Motion Correlations
★ A Seam Carving System That Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Based on an Open Online Social Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism with Skin Color Preservation
★ A Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Power Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Construction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Grouping Mechanism Using Genetic Algorithms Based on Complementarity and Social Network Analysis
★ A Virtual Musical Instrument Performance System with Real-Time Hand Tracking
★ A Real-Time Virtual Musical Instrument Performance System Based on Neural Networks
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access full text is licensed to users only for the purpose of academic research: personal, non-commercial retrieval, reading, and printing.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese, translated) 3D model reconstruction techniques using RGB-D information have received considerable attention from researchers worldwide in recent decades. An RGB-D sensor consists of a color camera and an infrared (IR) emitter and receiver. Using infrared light, an RGB-D sensor can capture both a color image and a depth image of a scene; the depth image gives the distance from the camera to each point in the scene. Because they provide both color and depth information, RGB-D sensors are widely used in many research fields, such as computer vision, computer graphics, and human-computer interaction.

This dissertation presents research results on calibrating the image information from a network of RGB-D sensors in order to reconstruct a 3D model of an object. We used a network of RGB-D sensors interconnected over a wireless network. A sensor network is needed to capture a person's live pose, since a single sensor cannot capture the pose from all directions. The high-bit-rate streams captured by each sensor are first collected and processed at a centralized PC; this can even be extended to a remote computer on the Internet.

Point clouds are then generated from the RGB-D information. A point cloud is a set of scattered 3D points representing the surface structure and color of the captured object. The point clouds produced by the individual sensors are then aligned with one another to create the 3D model; a modified version of the Iterative Closest Point (ICP) algorithm is introduced for this purpose.

The captured point clouds may contain considerable noise, caused by distortions in the camera itself, interference from the infrared emitters of other sensors, and infrared reflections affected by object surface properties, all of which reduce the accuracy of the captured points. We use two noise removal algorithms to eliminate this noise: an adaptive distance-based algorithm and an adaptive density-based algorithm. Distance-based noise removal is performed before the point clouds are aligned; the adaptive density-based algorithm is applied after alignment.

In most RGB-D cameras, the resolution of the color image is much higher than that of the depth image. Since the point cloud is generated from the depth image, the number of points in it depends on the depth image's resolution. A new 3D super-resolution algorithm is introduced to exploit the high-resolution color image and increase the number of points in the point cloud.

The surfaces in a point cloud may contain holes of various sizes, which must first be located and then filled. Because the camera cannot capture surfaces facing away from it, and because of occlusions along object silhouettes, the generated model can contain large holes; a 3D inpainting algorithm based on 2D inpainting is therefore proposed to fill the large holes in the point cloud. Finally, the object's surface is reconstructed from the cleaned point cloud.

Two kinds of experiments were performed. In the first, eight Kinect v2 cameras were used as the RGB-D cameras to capture and reconstruct upper-body (bust) human models. In the second, an Intel RealSense SR300 camera was used as the RGB-D camera to capture the surface of a theatrical puppet known in Taiwan as Budaixi. The experimental results show that the proposed methods produce a better 3D model.
Abstract (English) 3D model reconstruction techniques using RGB-D information have been gaining great attention from researchers around the world in recent decades. An RGB-D sensor consists of a color camera and an infrared (IR) emitter and receiver. Hence, an RGB-D sensor can capture both a color image and a depth image of a scene; the depth image provides the distance to each point in the scene. RGB-D sensors are widely used in many research fields, such as computer vision, computer graphics, and human-computer interaction, due to their ability to provide both color and depth information.
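The depth-to-distance mapping described above is the standard pinhole back-projection. As an illustration only (this is not the thesis's calibrated pipeline, and the intrinsics `fx`, `fy`, `cx`, `cy` below are made-up values), a depth image can be turned into 3D points like this:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # X = (u - cx) * Z / fx
    y = (v - cy) * depth / fy                       # Y = (v - cy) * Z / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # discard missing depth

# toy 2x2 depth image, 1 m everywhere, with hypothetical intrinsics
pts = depth_to_points(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

Each returned row is an (X, Y, Z) camera-space point; a real pipeline would read the intrinsics from the sensor's calibration and attach the RGB value of the corresponding color pixel.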

This dissertation presents research findings on calibrating information captured from a network of RGB-D sensors in order to reconstruct a 3D model of an object. We used a network of interconnected RGB-D sensors because a single sensor cannot capture the live gestures of a human from all views around the person. The high-bit-rate streams captured by each sensor are first collected at a centralized PC for processing; this can even be extended to a remote PC on the Internet.

Point clouds are then generated from the RGB-D information. A point cloud is a set of scattered 3D points that represent the surface structure and color of the captured object. The point clouds generated by the multiple sensors are then aligned with each other to create the 3D object. A modified version of the Iterative Closest Point (ICP) algorithm is introduced for this purpose.
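The classic point-to-point ICP loop that the thesis modifies alternates two steps: match each source point to its nearest destination point, then solve for the best rigid transform with the Kabsch (SVD) solution. A bare-bones sketch under those assumptions (brute-force matching; the thesis's modifications are not reproduced here):

```python
import numpy as np

def icp(src, dst, iters=20):
    """Align src to dst with vanilla point-to-point ICP."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # 1. match each source point to its nearest destination point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # 2. best rigid transform for these matches (Kabsch / SVD)
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:          # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti         # accumulate the total transform
    return R, t

# toy check: recover a pure translation
dst = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
src = dst + np.array([0.3, -0.2, 0.1])
R, t = icp(src, dst)
```

For this small example the recovered rotation is the identity and the translation is the negative of the applied offset; real scans need good initialization because ICP only converges locally.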

Captured point clouds may contain noise for several reasons, such as inherent camera distortions, interference from the infrared fields of other sensors, and inaccurate infrared reflection due to object surface properties. Two noise removal algorithms are introduced to remove such noise from the point clouds: an adaptive distance-based algorithm and an adaptive density-based algorithm. Adaptive distance-based noise removal is performed before the point clouds are aligned, and adaptive density-based noise removal is performed after alignment.
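The density-based idea is that genuine surface points sit in dense neighborhoods while noise points are isolated. A simple illustrative filter (a fixed global threshold on the mean k-nearest-neighbor distance, not the adaptive thresholding of the thesis):

```python
import numpy as np

def density_filter(points, k=3, factor=2.0):
    """Keep points whose mean distance to their k nearest neighbors is
    below factor times the global mean of that statistic."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)                          # column 0 is the self-distance (0)
    knn_mean = d[:, 1:k + 1].mean(axis=1)   # mean distance to k nearest others
    return points[knn_mean < factor * knn_mean.mean()]

# dense cluster near the origin plus one far-away noise point
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.01, size=(50, 3)), [[5.0, 5.0, 5.0]]])
clean = density_filter(cloud)
```

The isolated point at (5, 5, 5) has a huge neighbor distance and is dropped, while the 50 clustered points survive; production code would use a k-d tree instead of the quadratic distance matrix.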

In most RGB-D sensors, the resolution of the color image is much higher than that of the depth image. Point clouds are generated from the depth information, so the number of points in a point cloud depends on the resolution of the depth image. A new algorithm for 3D super-resolution of point clouds is introduced to increase the number of points by taking advantage of the higher-resolution color image.
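The densification step can be pictured on an organized point grid (one 3D point per depth pixel): new points are interpolated between adjacent grid points, and in the full method their colors come from the higher-resolution color image. A much-simplified midpoint-insertion sketch, not the thesis's 4-/8-neighbor algorithms:

```python
import numpy as np

def densify(grid):
    """Insert midpoints between horizontally and vertically adjacent
    points of an organized (H x W x 3) point grid."""
    h_mid = (grid[:, :-1] + grid[:, 1:]) / 2     # between left/right neighbors
    v_mid = (grid[:-1, :] + grid[1:, :]) / 2     # between up/down neighbors
    return np.vstack([grid.reshape(-1, 3),
                      h_mid.reshape(-1, 3),
                      v_mid.reshape(-1, 3)])

# flat 3x3 grid of points at integer (x, y) positions, z = 0
grid = np.zeros((3, 3, 3))
grid[..., 0], grid[..., 1] = np.meshgrid(np.arange(3), np.arange(3))
dense = densify(grid)
```

A 3x3 grid (9 points) gains 6 horizontal and 6 vertical midpoints, for 21 points in total; the real algorithm would also skip pairs whose depth difference indicates a surface discontinuity.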

Point clouds may contain both small and large holes in the surface. Small holes are first located, and three small-hole filling mechanisms are introduced. Large holes arise because the camera cannot capture surfaces that face away from it or surfaces hidden behind another object (occlusion). A 3D inpainting algorithm based on 2D inpainting is proposed to fill the large holes in the point clouds. Finally, a surface that clearly represents the captured 3D object is reconstructed from the point clouds.
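Hole boundaries can be located with an angle criterion (Section 3.3.2.2 of the contents): a point lies on a boundary when the directions to its neighbors leave a large angular gap. A toy version that works in the XY plane (the thesis projects neighbors onto each point's tangent plane instead):

```python
import numpy as np

def max_angular_gap(point, neighbors):
    """Largest angular gap between the directions from a point to its
    neighbors, measured in the XY plane. A large gap flags a boundary."""
    d = neighbors[:, :2] - point[:2]
    ang = np.sort(np.arctan2(d[:, 1], d[:, 0]))
    gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))  # wrap around
    return gaps.max()

# interior point: neighbors all around, so every gap is small
inner = max_angular_gap(np.zeros(3),
                        np.array([[1., 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]]))
# boundary point: neighbors only on one side, leaving a wide gap
edge = max_angular_gap(np.zeros(3),
                       np.array([[1., 0, 0], [1, 1, 0], [0, 1, 0]]))
```

Here the interior point's largest gap is pi/2 while the boundary point's is 3*pi/2, so thresholding the gap (e.g. at pi) separates the two cases.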

Two experiments were performed. In the first, eight Microsoft Kinect v2 sensors were used as the RGB-D sensors, and human busts were captured and reconstructed. In the second, one Intel RealSense SR300 sensor was used as the RGB-D sensor to capture and reconstruct the surface of a type of Taiwanese puppet called Budaixi. Experimental results demonstrate that the proposed methods generate a better 3D model of the captured object.
Keywords (Chinese) ★ 3D
★ Reconstruction
★ ICP
★ Noise removal
★ 3D inpainting
★ 3D video
Keywords (English) ★ 3D
★ Kinect
★ Reconstruction
★ ICP
★ Noise removal
★ 3D inpainting
★ 3D video
Table of Contents
Abstract xi
Acknowledgements xiv
List of Figures xix
List of Tables xxiv
List of Algorithms xxv
Abbreviations xxvi
Symbols xxvii
1 Introduction 1
1.1 Background ................................... 1
1.2 Dissertation Organization ........................... 5
2 Related Work 7
2.1 Image to World Projection ........................... 7
2.2 Geometry Registration ............................. 10
2.3 3D Reconstruction ............................... 15
2.3.1 Volume-based approaches ....................... 15
2.3.2 Surface-based approaches ........................ 16
2.3.3 Depth map based approaches ..................... 16
2.4 Noise Removal .................................. 17
2.5 Surface Reconstruction ............................. 18
2.6 Registration Evaluation ............................ 20
3 Proposed Method 22
3.1 Data Collection ................................. 24
3.1.1 Point cloud generation ......................... 24

3.1.2 Object extraction ............................ 24
3.2 Alignment and Object Refinement ....................... 25
3.2.1 Adaptive distance-based noise removal ................ 25
3.2.2 Super resolution of the point cloud .................. 27
3.2.2.1 4-neighbor super resolution ................. 27
3.2.2.2 8-neighbor super resolution ................. 31
3.2.3 Point cloud alignment ......................... 32
3.2.4 Adaptive density-based noise removal ................. 33
3.3 Inpainting and Final Object Construction .................. 35
3.3.1 3D gradient ............................... 35
3.3.2 Small hole finding ............................ 39
3.3.2.1 Neighborhood collection and update ............ 39
3.3.2.2 Angle criterion ........................ 40
3.3.2.3 Half-disk criterion ...................... 40
3.3.2.4 Boundary criterion ...................... 41
3.3.2.5 Combining criteria ...................... 42
3.3.3 Small hole filling ............................ 42
3.3.3.1 Adding one point at largest angle .............. 42
3.3.3.2 Adding 4 points at top four largest angles ......... 43
3.3.3.3 Adding 3 points at largest angle ............... 43
3.3.3.4 Interpolation of points .................... 43
3.3.4 3D inpainting .............................. 43
3.3.5 Head top ................................. 46
4 Results and Discussion 47
4.1 Data Collection ................................. 48
4.1.1 Experimental setup ........................... 48
4.1.2 Point cloud generation ......................... 50
4.1.3 Object extraction ............................ 50
4.2 Alignment and Object Refinement ....................... 51
4.2.1 Adaptive distance-based noise removal ................ 51
4.2.2 Super resolution of point clouds .................... 53
4.2.3 Time synchronization of the network of RGB-D sensors ....... 54
4.2.4 Alignment of point clouds ....................... 56
4.2.5 Adaptive density-based noise removal ................. 59
4.3 Inpainting and Final Object Construction .................. 62
4.3.1 3D gradient ............................... 62
4.3.2 Small hole finding ............................ 64
4.3.3 Small hole filling ............................ 65
4.3.4 Surface reconstruction ......................... 65
4.3.5 Budaixi experiment results ....................... 68
4.4 Analysis ..................................... 68
5 Conclusion and Future Works 86
Bibliography 88
Advisor  Prof. Timothy K. Shih (施國琛)    Date of Approval  2018-01-29
