Master's/Doctoral Thesis 106226050: Detailed Record




Author: Hsiu-Fung Lu (呂修鋒)    Graduate Department: Department of Optics and Photonics
Thesis Title: Study of Automatic Model Generation for Indoor Geometric Reconstruction Using RGB-D Scanning
(Original Chinese title: 使用RGB-D掃描點雲進行室內3D幾何重建之自動模型之研究)
Related Theses:
★ Study of reflector-type and lens-type LED headlamps
★ Study of a laser-multiplexed dual-source illumination module with photon recycling
★ Characterization and calibration of an imaging luminance measurement instrument
★ Light-guide bar design for a contact image sensor module
★ Study of digital optical phase conjugation for stereoscopic image display
★ Analysis of reflector fabrication materials for ECE-regulation LED headlamps
★ Optical design for producing high-contrast beam patterns with a lenticular lens array
★ Design and verification of an ultra-high-contrast ECE-regulation LED headlamp
★ Study of true-color restoration algorithms for captured images
Access Rights:
  1. The author has agreed to make this electronic thesis available for immediate open access.
  2. The open-access electronic full text is licensed to users solely for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese): This thesis investigates how color images from an RGB camera, combined with depth images from a depth sensor, can be used to generate a point cloud that recovers the three-dimensional layout of an indoor scene, and it establishes an automated pipeline for indoor model reconstruction. For depth sensing, time-of-flight (ToF) ranging is used to acquire distance data, from which the 3D coordinates of the point cloud are computed to represent the positions of indoor objects; noise points caused by ToF ranging errors are filtered out according to the local density distribution of the point cloud. Feature points are then detected in the color images, and the camera motion between consecutive frames of the scan sequence is computed from them, so that the point cloud of the entire indoor scene can be reconstructed.
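The depth-to-point-cloud and density-filtering steps above follow standard practice; the sketch below is a minimal Python illustration, assuming hypothetical pinhole intrinsics (fx, fy, cx, cy) and a simple neighbor-count filter standing in for the density-based noise removal, not the thesis's calibrated setup.

import numpy as np
from scipy.spatial import cKDTree

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project a depth image (meters) into camera-frame 3D points with the
    # pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]            # drop pixels with no depth return

def density_filter(points, radius=0.05, min_neighbors=8):
    # Keep only points with enough neighbors inside `radius`; isolated returns
    # (typical ToF noise) fall below the density threshold and are removed.
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, r=radius)
    counts = np.array([len(n) for n in neighbor_lists])   # each point counts itself
    return points[counts > min_neighbors]

# Synthetic example with hypothetical intrinsics: a flat wall about 2 m away.
rng = np.random.default_rng(0)
depth = 2.0 + 0.01 * rng.standard_normal((120, 160))
cloud = depth_to_point_cloud(depth, fx=130.0, fy=130.0, cx=79.5, cy=59.5)
cloud = density_filter(cloud)
print(cloud.shape)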
For automated model construction, boundary selection, region growing, and density-based classification of normal vectors are used to extract the wall point clouds; plane equations are then fitted to them by the least-squares method, and the indoor wall model is constructed. Finally, deep-learning semantic segmentation labels the indoor point cloud with object names, the points are classified according to these labels, and the result is exported in a graphics file format, completing the automated modeling pipeline.
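As an illustration of the plane-fitting step, the sketch below fits a plane to a wall-like point set by an orthogonal least-squares (SVD) fit of the centered points; this is one common least-squares variant and only a stand-in for the exact formulation used in the thesis, with synthetic data in place of real wall points.

import numpy as np

def fit_plane_least_squares(points):
    # Fit a plane a*x + b*y + c*z + d = 0 to an N x 3 point array by minimizing
    # squared orthogonal distances: the normal is the right singular vector of
    # the centered points with the smallest singular value.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d

# Synthetic wall-like data: a slightly noisy plane at x = 2 (normal along the x-axis).
rng = np.random.default_rng(0)
yz = rng.uniform(0.0, 3.0, size=(1000, 2))
wall = np.column_stack([2.0 + 0.01 * rng.standard_normal(1000), yz])
normal, d = fit_plane_least_squares(wall)
print(normal, d)    # normal close to (±1, 0, 0), d close to ∓2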
Abstract (English): In this thesis, we develop an automatic modeling pipeline that builds a precise 3D indoor map from a point cloud generated by a color camera combined with a depth sensor. For depth sensing, we adopt time-of-flight technology to capture depth information, so that the 3D coordinates of the sensed points can be used to build up 3D objects. The point-cloud density is used to filter out noise points caused by errors in the time-of-flight measurement. Then, through feature matching between adjacent images, we calculate the camera motion between captures and build up the point cloud for the whole field. For automatic modeling, we adopt boundary selection, region growing, and density classification of the normal vectors to extract the indoor wall point clouds. We then apply the least-squares method to fit plane equations, from which the model of the surrounding walls is generated. Finally, through deep-learning semantic segmentation, the indoor point cloud is labeled with object names and classified accordingly, and the result is exported as a graphics file to complete the automatic modeling.
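For the normal-vector classification mentioned above, a minimal sketch follows; it groups per-point unit normals with scikit-learn's MeanShift as a stand-in for the mean-shift density classification described in the thesis, using synthetic normals for two hypothetical wall orientations rather than estimated ones.

import numpy as np
from sklearn.cluster import MeanShift

def cluster_normals(normals, bandwidth=0.3):
    # Group unit normal vectors into dominant directions with mean shift;
    # points whose normals land in the same cluster lie on parallel surfaces
    # (e.g., walls sharing one orientation).
    ms = MeanShift(bandwidth=bandwidth)
    labels = ms.fit_predict(normals)
    return labels, ms.cluster_centers_

# Synthetic normals: two wall orientations plus a little noise.
rng = np.random.default_rng(1)
n1 = np.tile([1.0, 0.0, 0.0], (200, 1)) + 0.05 * rng.standard_normal((200, 3))
n2 = np.tile([0.0, 1.0, 0.0], (200, 1)) + 0.05 * rng.standard_normal((200, 3))
normals = np.vstack([n1, n2])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
labels, centers = cluster_normals(normals)
print(len(centers), "dominant wall orientations found")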
Keywords: Depth sensing, Spatial point cloud, Point cloud reconstruction, Object classification, Semantic segmentation, Automated modeling
Keywords (Chinese):
★ Depth sensing (深度感測)
★ Spatial point cloud (空間點雲)
★ Point cloud reconstruction (點雲重建)
★ Object classification (物件分類)
★ Semantic segmentation (語義分割)
★ Automated modeling (自動化建模)
Keywords (English):
★ Depth sensing
★ Spatial point cloud
★ Point cloud reconstruction
★ Object classification
★ Semantic segmentation
★ Automated modeling
Table of Contents:
Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Research Background and Motivation
1-2 Related Work and Review
1-3 Thesis Organization
Chapter 2: Fundamental Principles
2-1 Introduction
2-2 Camera Imaging
2-2-1 Pinhole Imaging
2-2-2 Camera Coordinates and Pixel Coordinates
2-2-3 World Coordinates and Camera Coordinates
2-3 Region Growing
2-4 Mean Shift
2-5 Least Squares Method
Chapter 3: Automated Construction of the Indoor Geometric Model
3-1 Introduction
3-2 Automated Modeling Workflow
3-3 Construction of the Full-Field Indoor Point Cloud
3-3-1 Generating Point Clouds from Depth Images
3-3-2 Full-Field Point Cloud Reconstruction
3-4 Indoor Geometric Modeling Workflow
3-4-1 Extracting Partial Wall Point Clouds by 2D Boundary Selection
3-4-2 Collecting Wall Information by Region Growing
3-4-3 Classifying Normal Vectors by Mean Shift
3-4-4 Separating Individual Walls by Region Growing
3-4-5 Fitting Wall Models by Least Squares
3-4-6 Wall Feature Extraction
3-5 Semantic Segmentation of 2D Images
3-5-1 2D Semantic Segmentation of Color Images
3-5-2 Labeling the Point Cloud from Semantic Segmentation Results
3-5-3 Separating Labeled Point Cloud Objects and Exporting the Model
Chapter 4: Error Evaluation of the Automatically Constructed Indoor Geometric Model
4-1 Model Error Evaluation of Automated Modeling
4-1-1 Measurement of the Scanned Scene
4-1-2 Model Error Evaluation
4-1-3 Length Error Analysis
4-1-4 Angle Error Analysis
4-2 Error Evaluation of Plane Equation Fitting
4-3 Scanning Results After Error Correction
4-4 Scanning Results for a New Scene
Chapter 5: Conclusions and Suggestions
References
Chinese-English Glossary of Terms
Appendix A: Time-of-Flight Ranging
Appendix B: Rotation Matrices
Appendix C: Distortion
Appendix D: Camera Parameter Acquisition
Appendix E: Feature Points
Appendix F: Feature Point Detection
Appendix G: Feature Point Description and Matching
Appendix H: Camera Motion Computation
Appendix I: Point Cloud Denoising
Appendix J: Normal Vector Estimation
Appendix K: 3D Model Files
Advisors: Tsung-Hsun Yang (楊宗勳), Ching-Cherng Sun (孫慶成)    Approval Date: 2019-10-04