Master's/Doctoral Thesis 108327027 — Detailed Record




Author: Hao-Yuan Lin (林浩沅)    Department: Institute of Opto-Mechatronic Engineering
Thesis Title: Automated high dynamic range with structured light for three-dimensional point cloud measurement technology
(自動化高動態範圍結合結構光用於三維點雲量測技術)
Related Theses
★ Development of a real-time MOCVD wafer surface temperature measurement system
★ Development of a real-time measurement system for key MOCVD wafer parameters
★ Fluorescence-microscopy-enhanced RDL circuit inspection system
★ Development of AI-based PCB defect inspection technology
★ Development of a multi-type PCB defect detection model based on YOLO object recognition
★ Full-field phase-type surface plasmon resonance technology
★ Study of a wavelength-modulated heterodyne grating interferometer
★ Image quality evaluation system for camera modules
★ High-speed inspection of laser trimming, applied to trimming TFT-LCD shorting-bar circuits
★ Study of intensity-differential surface plasmon resonance sensing
★ Study of quasi-common-path heterodyne grating interferometry
★ Study of wavelength-modulated heterodyne speckle interferometry
★ Full-field phase-type surface plasmon resonance biosensor
★ Study of angle and displacement measurement using a pigtailed laser diode optical pickup head
★ Study of a hybrid long-stroke precision positioning stage
★ Application of holographic concentrators for infrared spectral splitting
  1. The author has agreed to make this electronic thesis available immediately.
  2. The released full text is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) The purpose of this study is to develop an automated high-dynamic-range 3D point cloud measurement technique for measuring workpieces and building 3D point cloud data, with the results evaluated against the German VDI guideline to observe the modeling performance achieved when the automated high-dynamic-range technique is applied. Because 3D point cloud technology converts 2D image information into 3D point information, it places strict requirements on image quality and is easily affected by the environment and light sources. The proposed automated high-dynamic-range (High Dynamic Range, HDR) technique resolves overexposure and removes human operation factors; combined with structured light (Structured Light) and stereo vision (Stereo Vision, SV), it reconstructs the 3D point cloud of the object under test.
HDR imaging works by synthesizing images captured at different exposure times, applied here to the acquired fringe images. Since HDR requires multiple exposure-time parameters, and their choice is particularly important, this study adds an automated step that computes the required exposure times in software, completely eliminating manual control. The study thus solves measurement failures caused by overexposure and removes human operation factors, and the results obtained with the automated HDR technique are evaluated against the German VDI 2634 guideline to observe the modeling performance. In the evaluation, three sets of experiments were conducted, each measured five times. At a working distance of 400 mm, a focal length of 16 mm, a depth of field of 150 mm, and a field of view of 360 mm × 260 mm, the resolution along the X, Y, and Z axes reaches 0.1 mm, the accuracy reaches −0.15 mm to 0.1 mm, and the precision reaches 0.08 mm.
Abstract (English) The purpose of this research is to develop an automated high-dynamic-range 3D point cloud measurement technology for measuring workpieces and establishing 3D point cloud data, and finally to evaluate the results against the German VDI standard to observe the modeling performance achieved with the automated high-dynamic-range technique. Since 3D point cloud technology converts 2D image information into 3D point cloud information, it places special requirements on image quality and is easily affected by the environment and light sources. This research solves the overexposure problem through the proposed automated high-dynamic-range (HDR) technology and eliminates human operation factors; combined with structured light and stereo vision (SV), it can reconstruct the 3D point cloud data of the object under test.
The principle of HDR is to synthesize images captured at different exposure times and apply this to the acquired images. Since HDR requires multiple exposure-time parameters, and the choice of these parameters is particularly important, this research adds an automated method that computes the required exposure times in software, completely eliminating manual control factors. This research solves the problem of failed measurements caused by overexposure and removes human operation factors. The results obtained with the automated HDR technique are evaluated against the German standard VDI 2634 to observe the modeling performance after these problems are solved. In the evaluation, three sets of experiments were performed, and each set was measured five times. At a working distance of 400 mm, a focal length of 16 mm, a depth of field of 150 mm, and a field of view of 360 mm × 260 mm, the resolution along the X, Y, and Z axes reaches 0.1 mm, the accuracy reaches −0.15 mm to 0.1 mm, and the precision reaches 0.08 mm.
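The two ideas in the abstract — automatic selection of exposure times and fusion of the differently exposed images while discarding saturated pixels — can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual algorithm: the function names, the hat-shaped saturation weight, and the exposure-halving strategy are all hypothetical choices made for the example.

```python
import numpy as np

def saturation_weight(img, low=0.05, high=0.95):
    """Hat-shaped weight: favor mid-range pixels, zero out
    under- and over-exposed pixels (img is a float array in [0, 1])."""
    return np.clip(np.minimum(img - low, high - img), 0.0, None)

def select_exposures(capture, t_start, n=4, overexposed=0.95, max_fraction=0.01):
    """Pick exposure times automatically (illustrative strategy): starting
    from t_start, halve the exposure until the fraction of saturated pixels
    drops below max_fraction. `capture(t)` returns a float image in [0, 1]
    taken with exposure time t."""
    times, t = [], t_start
    for _ in range(n):
        times.append(t)
        if np.mean(capture(t) >= overexposed) < max_fraction:
            break  # current exposure is no longer overexposed
        t /= 2.0
    return times

def fuse_hdr(images, exposure_times):
    """Fuse the exposure stack into one radiance-like map by averaging
    per-exposure irradiance estimates (image / exposure time), weighted
    so that saturated pixels contribute nothing."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = saturation_weight(img)
        num += w * (img / t)   # irradiance estimate from this exposure
        den += w
    den[den == 0] = 1e-12      # guard against all-saturated pixels
    return num / den
```

Because the weight is zero for clipped pixels, a highlight that saturates at the longest exposure is still recovered from the shorter exposures, which is exactly the overexposure problem the abstract describes.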
Keywords (Chinese) ★ 高動態範圍 (high dynamic range)
★ 自動化 (automation)
★ 結構光 (structured light)
★ 立體視覺 (stereo vision)
★ 三維點雲 (3D point cloud)
Keywords (English) ★ high dynamic range
★ automation
★ structured light
★ stereo vision
★ 3D point cloud
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Research Background
1-2 Literature Review
1-2-1 Literature review on integrating structured light and stereo vision
1-2-2 Literature review on phase unwrapping techniques
1-2-3 Literature review on HDR techniques
1-3 Research Motivation and Objectives
1-4 Thesis Organization
Chapter 2: Fundamental Theory
2-1 Principles of Structured Light
2-1-1 Gray code fringes (Gray Code, GC)
2-1-2 Binarization of Gray code fringes
2-1-3 Three-step phase-shifted fringes (Three-Step Phase Shift, PS)
2-1-4 Phase unwrapping (Unwrapping)
2-2 High dynamic range image synthesis (High Dynamic Range, HDR)
2-3 Stereo vision (Stereo Vision, SV)
2-3-1 Stereo vision calibration
2-3-2 Epipolar geometry
2-3-3 Rectification
2-3-4 Disparity
2-4 Summary
Chapter 3: System Architecture
3-1 Imaging system architecture
3-1-1 3D point cloud system components
3-1-2 3D point cloud system architecture
3-1-3 Fringe pattern design
3-2 Flowchart for exposure-time computation and 3D point cloud modeling
3-3 Automated HDR
3-4 Mask design
3-5 Noise handling
3-6 Summary
Chapter 4: 3D Point Cloud Results
4-1 Standard plane
4-2 Standard ball bar
4-3 Freeform surface
4-4 Summary
Chapter 5: System Error Analysis
5-1 Relationship between experimental and theoretical resolution
5-2 Measurement repeatability standard deviation and accuracy
5-3 Relationship of error to depth and disparity noise
5-4 Summary
Chapter 6: Conclusions and Future Work
6-1 Conclusions
6-2 Future Work
References
References
[1] G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, “Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications,” Appl. Optics 36 (19), pp. 4463-4472 (1997).
[2] J. Salvi, J. Pagès, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognition 37 (4), pp. 827-849 (2004).
[3] H. Zhao, B. Shi, C. Fernandez-Cull, S. K. Yeung, and R. Raskar, “Unbounded high dynamic range photography using a modulo camera,” IEEE Int. Conf. Comput. 1 (1), pp. 1-10 (2015).
[4] G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Optics 38 (31), pp. 6565-6573 (1999).
[5] D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using structured light,” Proc. IEEE CVPR 1 (1), pp. 195-202 (2003).
[6] X. Han and P. Huang, “Combined stereovision and phase shifting method: a new approach for 3-D shape measurement,” Proc. SPIE 7389 (1), pp. 73893C (2009).
[7] Y. Zhang, and A. Yilmaz, “Structured light based 3D scanning for specular surface by the combination of gray code and phase shifting,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLI-B3 (1), pp. 137-142 (2016).
[8] Q. Zhang, X. Su, L. Xiang, and X. Sun, “3-D shape measurement based on complementary gray-code light,” Opt. Laser Eng. 50 (4), pp. 574-579 (2012).
[9] Y. An, and S. Zhang, “Three-dimensional absolute shape measurement by combining binary statistical pattern matching with phase-shifting methods,” Appl. Optics 56 (19), pp. 5418-5426 (2017).
[10] Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting Gray-code light,” Opt. Express 27 (16), pp. 22631-22644 (2019).
[11] S. Zhang, and S. T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48 (3), pp.033604 (2009).
[12] H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces,” Opt. Laser Eng. 50 (10), pp. 1484-1493 (2012).
[13] B. Zhang, Y. Ouyang, and S. Zhang, “High dynamic range saturation intelligence avoidance for three-dimensional shape measurement,” IEEE Acm. Int. Symp. 1 (1), pp. 981-990 (2015).
[14] L. Rao and F. Da, “High dynamic range 3D shape determination based on automatic exposure selection,” J. Vis. Commun. Image Represent. 50 (1), pp. 217-226 (2018).
[15] Advantages of Gray code, retrieved from https://zh.wikipedia.org/wiki/%E6%A0%BC%E9%9B%B7%E7%A0%81
[16] D. Lanman and G. Taubin, Build Your Own 3D Scanner: 3D Photography for Beginners (SIGGRAPH 2009 Course Notes, 2009).
[17] Gray code encoding rules, retrieved from https://openhome.cc/Gossip/AlgorithmGossip/GrayCode.htm
[18] D. Zheng and F. Da, “Self-correction phase unwrapping method based on Gray-code light,” Opt. Laser Eng. 50 (8), pp. 1130-1139 (2012).
[19] B. Chen and S. Zhang, “High-quality 3D shape measurement using saturated fringe patterns,” Opt. Laser Eng. 87 (1), pp. 83-89 (2016).
[20] P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Appl. Optics 45 (21), pp. 5086-5091 (2006).
[21] D. Bergmann, “New approach for automatic surface reconstruction with coded light,” Proc. SPIE 2572 (1), pp. 2-9 (1995).
[22] D. Zheng, F. Da, Q. Kemao and H. S. Seah, “Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter,” Opt. Express 25 (5), pp. 4700-4713 (2017).
[23] B. Li, Y. An, D. Cappelleri, J. Xu and S. Zhang, “High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot Appl. 1 (1), pp. 86-103 (2017).
[24] H. Lin and Z. Song, “3D Reconstruction of Specular Surface via a Novel Structured Light Approach,” IEEE Int. Conf. on Information and Automation 1 (1), pp. 530-534 (2015).
[25] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on PAMI. 22 (11), pp. 1330-1334 (2000).
[26] R. Hartley, Multiple View Geometry in Computer Vision Second Edition (Cambridge University Press, 2011).
[27] H. J. Chien, “Beginner’s Guide to Fundamental Matrix, Essential Matrix and Camera Motion Recovery” (2016), retrieved from https://www.researchgate.net/publication/303522230_qiantanjichujuzhenbenzhijuzhenyuxiangjiyidong_Beginner′s_Guide_to_Fundamental_Matrix_Essential_Matrix_and_Camera_Motion_Recovery
[28] Homography, retrieved from https://zh.wikipedia.org/wiki/%E5%8D%95%E5%BA%94%E6%80%A7
[29] Image rectification, retrieved from https://en.wikipedia.org/wiki/Image_rectification
[30] C. Loop and Z. Zhang, “Computing Rectifying Homographies for Stereo Vision,” In. CVPR. I (1), pp. 125-131 (1999).
[31] N. Qian, “Binocular Disparity and the Perception of Depth,” Neuron 18 (1), pp. 359-368 (1997).
[32] O. Krutikova, “Creation of a Depth Map from Stereo Images of Faces for 3D Model Reconstruction,” Procedia. Comput. SCI. 104 (1), pp. 452-459 (2017).
Advisor: Ju-Yi Lee (李朱育)    Review Date: 2021-07-19
