Doctoral and Master's Theses: Detailed Record for Thesis 107327018




Name  Meng-He Chiu (邱盟賀)    Department  Mechanical Engineering
Thesis title  Improve the effect of shadow on shape from focus by exponential correction (以指數校正法改善陰影對聚焦成形術的影響)
Related theses
★ Study of a heterodyne optical grating interferometer
★ Color texture mapping for reconstructing 3D color models from 2D images and development of a 3D model reconstruction system
★ Axial positioning control of a confocal microscope system with a laser interferometer
★ Polarization interferometry for measuring optical rotation and glucose concentration
★ Study of a novel large-scale positioning stage based on quasi-common-path interferometry
★ Wavelength-modulated heterodyne speckle interferometry for angle measurement
★ Full-field intensity-differential surface plasmon resonance detection
★ Small-angle measurement based on total-internal-reflection wavelength-modulated heterodyne interferometry
★ A novel wavelength-modulated heterodyne light source for displacement measurement
★ A moiré-based autocollimator system
★ Development of a dual-image, multi-view structured-light to 3D point data technique
★ A polarization standing-wave interferometer for displacement measurement
★ Dual confocal microscopy for object thickness measurement
★ Development of silicon thin-film processes for solar cells using plasma diagnostics
★ Refractive index measurement based on total-reflection common-path polarization interferometry
★ Point diffraction interferometry for lens aberration measurement
Files  Full text available in the repository after 2026-09-01
Abstract (Chinese)  The purpose of this study is to use an exponential correction method to mitigate the effect of shadows on the shape-from-focus (SFF) technique. SFF is mainly used to measure surface topography and to reconstruct objects in three dimensions, and it measures the object over the full field. During machine-vision image acquisition, external factors such as illumination and vibration often degrade image quality; this study addresses the degradation caused by shadows.
The principle of SFF is to place the object under test on a reference plane and to displace the CCD or the object by known steps, capturing images continuously during the motion. The sharpness of each image in the sequence indicates where the surface is in focus, and from this the depth map is computed. Sharpness is evaluated from the intensity differences within a local neighborhood of each image using a focus-measure algorithm: for the same neighborhood across the image stack, the intensity variation is small when the image is blurred and large when it is in focus. Our experiments show that shadows on the surface lower the image intensity, making the surface appear smooth or textureless, so the system cannot distinguish one image in the stack from another. We therefore propose an exponential correction method that effectively mitigates the influence of shadows. In the experiments, several algorithms were used to compute the focus measure, the images were corrected with the proposed exponential correction, and the results before and after correction were compared. The experiments verify the feasibility and the improvement provided by the exponential correction, and the measurement performance of the SFF system was validated against a white-light interferometer. The system has a spatial resolution of 1 μm/pixel and an axial resolution between 50 and 100 μm, and the exponential correction reduces the error by at least 10 %.
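The abstract above describes the SFF pipeline: capture a focal stack while stepping the stage, score every pixel neighborhood with a focus measure, and take the stage position with the maximum focus value as the depth. The sketch below illustrates that pipeline in Python under stated assumptions: the array `stack` (shape N x H x W), the list `z_positions`, the 9 x 9 window, and the sum-modified-Laplacian operator are illustrative choices standing in for the several focus measures the thesis compares; none of this is the thesis's own code.

```python
import numpy as np

def modified_laplacian(img):
    """Absolute second differences along x and y (a simple modified-Laplacian response)."""
    lap_x = np.abs(2.0 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    lap_y = np.abs(2.0 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lap_x + lap_y

def focus_volume(stack, window=9):
    """Focus measure per frame: modified Laplacian averaged over a window x window neighborhood."""
    stack = np.asarray(stack, dtype=float)          # (N, H, W) focal stack
    kernel = np.ones(window) / window               # separable box filter
    fm = np.empty_like(stack)
    for k, img in enumerate(stack):
        ml = modified_laplacian(img)
        ml = np.apply_along_axis(np.convolve, 1, ml, kernel, mode="same")  # smooth rows
        ml = np.apply_along_axis(np.convolve, 0, ml, kernel, mode="same")  # smooth columns
        fm[k] = ml
    return fm

def depth_from_focus(stack, z_positions, window=9):
    """For every pixel, return the stage position whose frame maximizes the focus measure."""
    fm = focus_volume(stack, window)
    best = np.argmax(fm, axis=0)                    # (H, W) frame indices
    return np.asarray(z_positions)[best]            # depth map in the units of z_positions

if __name__ == "__main__":
    # Tiny synthetic check: five frames of the same texture, sharpest contrast in frame 2.
    rng = np.random.default_rng(0)
    base = rng.random((64, 64))
    stack = np.stack([base * s for s in (0.2, 0.6, 1.0, 0.6, 0.2)])
    depth = depth_from_focus(stack, z_positions=[0, 50, 100, 150, 200])
    print(depth.shape, depth.min(), depth.max())    # expected: (64, 64) 100 100
```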
Abstract (English)  The purpose of this research is to improve the effect of shadows on shape from focus (SFF) with an exponential correction method. SFF is mainly used to measure the surface topography of objects and to perform three-dimensional reconstruction, and it can measure objects over the full field. In machine-vision imaging, external environmental factors such as lighting and vibration often lead to poor image quality; this thesis addresses the impact of shadows.

The principle of SFF is to place the object to be measured on a reference plane, move the CCD or the object by a known distance, continuously capture images during the displacement, judge the in-focus position from image sharpness, and then calculate the depth map. Sharpness is determined from the intensity differences within the neighborhood of each image, and the focus value is computed with a focus-measure algorithm. In this study, we found experimentally that shadows on the surface reduce the image intensity, producing a smooth or textureless appearance, so the system cannot distinguish between the images in the stack. Therefore, the images are corrected before the focus-measure algorithm is applied. We compute the focus value with a variety of algorithms and run experiments with and without shadows on the surface. The images are corrected with the exponential correction proposed in this research, and the difference before and after correction is analyzed. The measurement performance of the system is verified with a white-light interferometer: the spatial resolution is 1 μm/pixel, the axial resolution is between 50 and 100 μm, and the correction reduces the error by at least 10 %.
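The abstract states that images are corrected before the focus measure is applied but does not give the exponential correction formula, so the snippet below is only a hypothetical illustration of the idea: a power-law (gamma-style) remapping of normalized intensities that lifts shadowed pixels so the focus operator can respond to the restored local contrast. The function name `exponential_correction` and the value `gamma=0.5` are assumptions, not the published method.

```python
import numpy as np

def exponential_correction(img, gamma=0.5):
    """Power-law remap of an 8-bit image; gamma < 1 brightens dark (shadowed) regions."""
    norm = np.clip(img.astype(float) / 255.0, 0.0, 1.0)   # normalize to [0, 1]
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

# Hypothetical usage with the depth_from_focus() sketch above: correct every frame,
# then rerun the same focus-measure / argmax pipeline and compare the two depth maps.
# corrected_stack = np.stack([exponential_correction(frame) for frame in stack])
# depth_corrected = depth_from_focus(corrected_stack, z_positions)
```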
Keywords (Chinese)  ★ 聚焦成形 (shape from focus)
★ 照明 (illumination)
★ 陰影 (shadow)
★ 指數校正法 (exponential correction)
★ 深度圖 (depth map)
★ 三維重建 (3D reconstruction)
Keywords (English)  ★ shape from focus
★ illumination
★ shadow
★ exponential correction
★ depth map
★ 3D reconstruction
Thesis table of contents  Abstract (in Chinese) I
Abstract II
Acknowledgements III
Table of contents IV
List of figures VII
List of tables X
Chapter 1  Introduction 1
1-1 Research background 1
1-2 Literature review 2
1-2-1 Review of shape-from-focus methods 2
1-2-2 Review of shadow mitigation 6
1-3 Research motivation 11
1-4 Thesis organization 12
Chapter 2  Fundamental theory 13
2-1 Basic principles of optical lenses 13
2-1-1 Conventional lenses 13
2-1-2 Telecentric lenses 18
2-1-3 Point-source spread 20
2-2 Principle of shape from focus 22
2-3 Focus-measure algorithms 24
2-3-1 Laplacian-based algorithms 25
2-3-2 Gradient-based algorithms 26
2-3-3 Statistics-based algorithms 28
2-3-4 Wavelet-based algorithms 29
2-3-5 Discrete-cosine-transform algorithms 30
2-4 Summary 32
Chapter 3  System architecture 33
3-1 System components 33
3-2 Shape-from-focus system architecture 34
3-3 System flowchart 35
3-4 Effect of shadows 37
3-5 Exponential correction 39
3-6 Summary 41
Chapter 4  Experimental results and discussion 42
4-1 Measurement samples 42
4-2 Experimental conditions 43
4-2-1 Uniform illumination 43
4-2-2 Presence of shadows 46
4-3 Measurement results with different algorithms 48
4-3-1 Laplacian-based algorithms 48
4-3-2 Gradient-based algorithms 51
4-3-3 Statistics-based algorithms 53
4-3-4 Wavelet-based algorithms 56
4-3-5 Discrete-cosine-transform algorithms 59
4-3-6 Other algorithms 61
4-4 Verification of measurement results 64
4-5 System performance analysis 67
4-5-1 Repeatability 67
4-5-2 Resolution 68
4-6 Summary 71
Chapter 5  Error analysis 72
5-1 Systematic errors 72
5-1-1 Displacement error from a non-perpendicular optical axis 72
5-1-2 Error from the motorized stage deviating from the commanded displacement 74
5-2 Random errors 74
5-2-1 Environmental vibration 75
5-2-2 Mechanical vibration 77
5-3 Summary 77
Chapter 6  Conclusions and future work 78
6-1 Conclusions 78
6-2 Future work 78
References 79
References  [1] S. K. Nayar and Y. Nakagawa, “Shape from focus,” Pattern Analysis and Machine Intelligence, Vol. 16, no. 8, pp. 824-831 (1990).
[2] S. Pertuz, D. Puig, and M. A. Garcia, “Analysis of focus measure operators for shape-from-focus,” Pattern Recognition, 46, pp. 1415-1432 (2013).
[3] A. S. Malik and T. S. Choi, “Consideration of illumination effects and optimization of window size for accurate calculation of depth map for 3D shape recovery,” Pattern Recognition, 40, pp. 154-170 (2007).
[4] S. K. Nayar and Y. Nakagawa, “Shape from focus: an effective approach for rough surfaces,” Robotics and Automation, 3863637 (1990).
[5] R. Minhas, A. A. Mohammed, Q. M. J. Wu, and M. A. Sid-Ahmed, “3D shape from focus and depth map computation using steerable filters,” ICIAR, LNCS 5627, pp. 573-583 (2009).
[6] B. Jiang, L. Guo, and F. Chen, “Shape from focus using statistics methods,” ICS2 (2017).
[7] S. M. Mannan, A. S. Malik, H. N., and T. S. Choi, “Rectification of illumination in images used for shape from focus,” ISVC, LNCS 4292, pp. 166-175 (2006).
[8] M. S. Muhammad, A. S. Malik, and T. S. Choi, “Affects of illumination on 3D shape recovery,” Image Processing, 10423074 (2008).
[9] H. M. Merklinger, “The ins and outs of focus,” ISBN 0-9695025-0-8 (1990).
[10] Z. Wang, W. Wu, X. Xu, and D. Xue, “Recognition and location of the internal corners of planar checkerboard calibration pattern image,” Applied Mathematics and Computation, Vol. 185, no. 2, pp. 894-906 (2007).
[11] M. Watanabe and S. K. Nayar, “Telecentric optics for focus analysis,” Pattern Analysis and Machine Intelligence, Vol. 19, no. 12, pp. 1360-1365 (1997).
[12] S. K. Nayar, M. Watanabe, and M. Noguchi, “Real-time focus range sensor,” Pattern Analysis and Machine Intelligence, Vol. 18, no. 12, pp. 1186-1198 (1996).
[13] Y. Sun, S. Duthaler, and B. J. Nelson, “Autofocusing algorithm selection in computer microscopy,” Intelligent Robots and Systems, 8734621 (2005).
[14] T. Yeo, S. O. Jayasooriah, and R. Sinniah, “Autofocusing for tissue microscopy,” Image Vision Comput., Vol. 11, pp. 629-639 (1993).
[15] F. Groen, I. T. Young, and G. Ligthart, “A comparison of different focus functions for use in autofocus algorithms,” Cytometry, 6, pp. 81-91 (1985).
[16] G. Yang and B. J. Nelson, “Wavelet-based autofocusing and unsupervised segmentation of microscopic images,” Conference on Intelligent Robots and Systems (2003).
[17] M. Subbarao, T. Choi, and A. Nikzad, “Focusing techniques,” J. Opt. Eng., Vol. 32, no. 11, pp. 2824-2836 (1993).
[18] M. Charfi, A. Nyeck, and A. Tosser, “Focusing criterion,” Electron. Lett., Vol. 27, no. 14, pp. 1233-1235 (1991).
[19] S. Y. Lee, Y. Kumar, J. M. Cho, S. W. Lee, and S. W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 18, no. 9, pp. 1237-1246 (2008).
[20] R. McFeely, C. Hughes, E. Jones, and M. Glavin, “Removal of non-uniform complex and compound shadows from textured surfaces using adaptive directional smoothing and the thin plate model,” IET Image Process., Vol. 5, no. 3, pp. 233-248 (2011).
[21] Distortion Pattern (optics), from https://en.wikipedia.org/wiki/Distortion_(optics).
[22] Parallax Pattern, from https://www.edmundoptics.com.tw/knowledge-center/application-notes/imaging/advantages-of-telecentricity/.
[23] Telecentric Lens, from https://www.edmundoptics.com.tw/knowledge-center/application-notes/imaging/telecentric-design-topics/.
[24] T-20 USAF 1951 Chart Standard Layout Product Specifications.
Advisor  Ju-Yi Lee (李朱育)    Date of approval  2021-01-18