Graduate Thesis Record 109226020: Details




Author: Shiang-Shiuan Tsai (蔡翔軒)    Department: Department of Optics and Photonics
Thesis Title: Design of a Depth Camera Lens and Its Use in Measuring Object and Image Speeds
Related Theses
★ Design and Application of White-Light LEDs for Residential Lighting
★ Ultra-Wide-Angle Automotive Lens Design
★ Development of a Microlens-Array Integrator Optical System for Color-Sequential Pico Projectors
★ LED Color-Temperature Control Technology and Its Application to Color-Sequential Pico Projectors
★ Trajectory Optimization Control for Optical Zoom
★ Simulation and Analysis of LED Lighting and Hybrid LED/Sunlight Illumination for Indoor Lighting
★ An Etendue-Based Optical Design Method for Pico Projectors and Its Implementation
★ Lens Design for Optical Microscopes
★ Design of Hidden Fingerprint Recognition in Mobile Phones
★ Optical Path Design of a DLP Pico Projection System
★ High-Efficiency Blu-ray Disc Pickup Head
★ Design of a Modular Dual-Wavelength Optical Pickup Head and Its Application to Angle Measurement
★ Lens Design for Digital Cameras
★ Compound Optical Pickup Head with a Single Photodetector
★ Design of a 3-Megapixel 2.75x Optical Zoom Mobile Phone Lens
★ Effect of Prism Glass Selection on Chromatic Aberration and Its Correction
Files: Full text viewable in the system after 2027-08-23
Abstract (Chinese) The first half of this thesis presents the design of a 1-megapixel depth camera lens composed of eight glass aspheric elements together with two flat glass plates, one an infrared cut filter and the other a sensor cover glass. The lens length (from the first lens surface to the image plane) is 18.94 mm, the effective aperture of every element is below 12 mm, the full field of view is 100.44 degrees, the effective focal length is 1.88 mm, and the F-number is 2.
Because the wide field of view lowers the relative illumination, vignetting is used to control the off-axis solid angle and raise it. The solid angle and relative illumination are computed from formulas and compared with the Code V values, and the relationship between the through-focus MTF, the depth of focus, and the depth of field is examined.
With the focus fixed at the design object distance of -755.5 mm and image distance of 1.88469 mm, the final design has a depth of focus from -0.02343 mm to +0.02207 mm, corresponding to a depth of field covering object distances from -∞ to -133.9 mm. The MTF exceeds 0.845 at 60 lp/mm, the lateral chromatic aberration is smaller than one sensor pixel, the optical distortion and TV distortion are both 1%, and the relative illumination exceeds 64.23%.
The second half reports six experiments that measure object and image speeds with an Intel RealSense D455 depth camera. A moving object is filmed with the depth camera, the image speed is computed first, and the object speed is then recovered from it by formula. The speed measured with the depth camera is compared with the actual object speed, the error between the two is computed, and the likely causes of the error are analyzed. In all six experiments the speed-measurement error is below 6%.
Abstract (English) The first half of this thesis presents the design of a 1-megapixel depth camera lens consisting of eight glass aspheric elements and two flat glass plates, one an infrared cut filter and the other a sensor cover glass. The lens length (from the first lens surface to the image plane) is 18.94 mm, the effective aperture of every element is less than 12 mm, the full field of view is 100.44 degrees, the effective focal length is 1.88 mm, and the F-number is 2.
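As a quick consistency check on the stated specifications (a sketch based on the ideal f·tan(θ) mapping, which is not from the thesis; a real wide-angle design deviates from it by its stated distortion):

```python
import math

# Hedged sketch: ideal (distortion-free) half image height implied by
# the stated specs -- EFL 1.88 mm, full field of view 100.44 degrees.
efl_mm = 1.88
half_fov_deg = 100.44 / 2

half_image_height_mm = efl_mm * math.tan(math.radians(half_fov_deg))
print(f"ideal half image height: {half_image_height_mm:.3f} mm")  # 2.258 mm
```

This value is the semi-diagonal the sensor would need under ideal imaging; the design's 1% distortion shifts the real image height only slightly from it.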
Because of the lens's large field of view, the relative illumination of the initial design was low. Vignetting was therefore used to control the off-axis solid angle and improve the relative illumination. The solid angle and relative illumination were then calculated from formulas and compared with the Code V values, and the relationship between the through-focus MTF, the depth of focus, and the depth of field was explored.
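The classical cos⁴ law gives a feel for why relative illumination falls at wide field angles and why controlling the off-axis solid angle matters. A minimal sketch of that baseline estimate (my own illustration, not the thesis's solid-angle formula):

```python
import math

def cos4_relative_illumination(field_angle_deg: float) -> float:
    """Classical cos^4 estimate of relative illumination at a field angle.
    Real designs can exceed this by shaping the off-axis solid angle,
    e.g. via deliberate vignetting, as described in the abstract."""
    return math.cos(math.radians(field_angle_deg)) ** 4

# At the 50.22-degree half field of view, cos^4 alone predicts only ~17%,
# while the reported design achieves more than 64%.
print(f"{cos4_relative_illumination(50.22):.3f}")  # 0.168
```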
Because the focus is fixed, under the design condition of a -755.5 mm object distance and a 1.885 mm image distance, the final design has a depth of focus from -0.02343 mm to +0.02207 mm, corresponding to a depth of field covering object distances from -∞ to -133.9 mm. The MTF is greater than 0.845 at 60 lp/mm, the lateral chromatic aberration is smaller than one sensor pixel, the optical distortion and TV distortion are both equal to 1%, and the relative illumination is greater than 64.23%.
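The quoted depth-of-field limits can be sanity-checked with the Gaussian imaging equation 1/v - 1/u = 1/f. This is an assumption-laden thin-lens sketch, not the thesis's full raytrace analysis:

```python
# Thin-lens sketch (my assumption): map the depth-of-focus limits on the
# image side back to object distances. Negative u denotes a real object.
f = 1.88                                # effective focal length, mm
v0 = 1.88469                            # nominal image distance, mm
dof_near, dof_far = 0.02207, -0.02343   # depth-of-focus limits, mm

def object_distance(v: float, f: float) -> float:
    """Object distance u for image distance v via 1/v - 1/u = 1/f."""
    return 1.0 / (1.0 / v - 1.0 / f)

near = object_distance(v0 + dof_near, f)  # closer objects focus farther back
print(f"near limit: {near:.1f} mm")       # -134.0 mm, matching -133.9 mm
# At v0 + dof_far the required object distance changes sign, i.e. passes
# beyond infinity, so the far depth-of-field limit is effectively -infinity.
```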
In the second half, the Intel RealSense D455 depth camera was used to measure object and image speeds in six different situations. The moving object was filmed with the depth camera, the image speed was computed first, and the object speed was then recovered from it by formula. The speed measured by the depth camera was compared with the actual object speed, the error between the two was calculated, and the likely causes of the error were analyzed. In all six experiments the measured speed error was less than 6%.
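The speed-recovery step described above can be sketched with a small-magnification approximation; the pixel pitch, frame rate, and depth below are illustrative assumptions, not the thesis's experimental values:

```python
# Hedged sketch of recovering lateral object speed from image speed.
# All numeric values here are illustrative assumptions.
f_mm = 1.88             # effective focal length
pixel_pitch_mm = 0.003  # assumed 3-um sensor pixels
fps = 30.0              # assumed frame rate
depth_mm = 755.5        # object distance reported by the depth camera

def object_speed_mm_s(pixels_per_frame: float) -> float:
    """Lateral object speed from image-plane motion via the lateral
    magnification m ~= f / object distance (valid for distant objects)."""
    image_speed = pixels_per_frame * pixel_pitch_mm * fps  # mm/s on sensor
    magnification = f_mm / depth_mm
    return image_speed / magnification

print(f"{object_speed_mm_s(2.0):.1f} mm/s")  # 72.3 mm/s
```

Comparing this recovered speed with the independently known object speed gives the error figure the experiments report.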
Keywords (Chinese) ★ Depth camera
★ Depth camera lens design
★ Through-focus MTF
★ Depth of focus
★ Depth of field
★ Vignetting
★ Relative illumination
★ Object and image speed measurement
Keywords (English) ★ Depth Camera
★ Depth Camera Lens Design
★ Through Focus MTF
★ Depth of Focus
★ Depth of Field
★ Vignetting
★ Relative Illuminance
★ Object and Image Speed Measurement
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iv
Table of Contents vi
List of Figures ix
List of Tables xii
Chapter 1 Introduction 1
1-1 Research Motivation 1
1-2 Literature Review 3
1-3 Thesis Organization 14
Chapter 2 Theory 15
2-1 Object-Image Relationship of a Lens 15
2-2 Depth Camera Lens Design Method 16
2-2-1 Sensor Specifications 16
2-2-2 Lens Specifications 16
2-2-3 Design Targets 18
2-2-4 Lens Layout and Design Data 20
2-3 Relationship Between Through-Focus MTF, Depth of Focus, and Depth of Field 22
2-4 Relative Illumination 24
2-4-1 Circular and Elliptical Solid Angles 24
2-4-2 Defining Reference Rays and the Image-Space Numerical Aperture 26
2-4-3 Relationship Between Projected and Normal Solid Angles 29
2-4-4 Relative Illumination Formula 30
2-5 Distortion 31
2-5-1 Optical Distortion 31
2-5-2 TV Distortion 32
2-6 Vignetting 33
2-6-1 Definition of Vignetting 33
2-6-2 Vignetting Factors 34
2-6-3 Comparison of Designs With and Without Vignetting Factors 34
2-7 Lateral and Longitudinal Magnification 38
Chapter 3 Depth Camera Lens Design Results 42
3-1 Imaging Quality of the Depth Camera Lens Design 42
3-1-1 MTF 42
3-1-2 Distortion 44
3-1-3 Lateral Chromatic Aberration 45
3-2 Tolerance Analysis 45
3-2-1 Tolerance Analysis Procedure 46
3-2-2 Tolerance Analysis Results 48
3-3 Solid Angle and Relative Illumination Calculations 53
3-3-1 Solid Angle at the 0-Degree Half Field of View 53
3-3-2 Solid Angle at the 50.22-Degree Half Field of View 56
3-3-3 Comparison of Code V Values and Formula Results at the 0-Degree and 50.22-Degree Half Fields of View 59
3-3-4 Illuminance and Relative Illumination Calculations 60
Chapter 4 Measuring Object and Image Speeds with a Depth Camera 61
4-1 Experimental Procedure 61
4-2 Image Speed Measurement for Lateral Object Displacement 65
4-3 Image Speed Measurement for Longitudinal Object Displacement 77
Chapter 5 Conclusions and Future Work 85
5-1 Conclusions 85
5-2 Future Work 87
References 88
References [1] Annotated Biography of Chris J. Condon, Pioneer of 3-D,
http://3d.hollywoodfilmsinternational.com/CHRIS-CONDON-BIO-v2.html
[2]Chris J. Condon, “Film projection lens system for 3-D movies,” U.S. patent 4,235,503 (Nov. 25 1980).
[3]Chris J. Condon, “Motion picture system for single strip 3-D filming,” U.S. patent 4,464,028 (Aug. 7 1984).
[4]Wang Qionghua, Wang Aihong, “Survey on stereoscopic three-dimensional display,” Journal of Computer Applications, 30 (3), 579-581 (2010).
[5]Robert D. Bock, “Low-cost 3D security camera,” Proc. SPIE 10643, Autonomous Systems: Sensors, Vehicles, Security, and the Internet of Everything, 106430E (3 May 2018).
[6]Shanshan Wang, Jiong Liang, Xiu Li, Fan Su, and Zhenshan Zhao, “A calibration method on 3D measurement based on structured-light with single camera,” Proc. SPIE 11434, 2019 International Conference on Optical Instruments and Technology: Optical Systems and Modern Optoelectronic Instruments, 114341H (12 March 2020).
[7]Raul Vargas, Lenny A. Romero, Song Zhang, and Andres G. Marrugo, “Toward high accuracy measurements in structured light systems,” Proc. SPIE 12098, Dimensional Optical Metrology and Inspection for Practical Applications XI, 120980A (31 May 2022).
[8]Xin Qiao, Chenyang Ge, Huimin Yao, Pengchao Deng, and Yanhui Zhou, “Valid depth data extraction and correction for time-of-flight camera,” Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), 114332K (31 January 2020).
[9]Yoon-Seop Lim, Sung-Hyun Lee, Wook-Hyeon Kwon, and Yong-Hwa Park, “Depth image super resolution method for time-of-flight camera based on machine learning,” Proc. SPIE 12019, AI and Optical Data Sciences III, 120190S (2 March 2022).
[10]Yung-Lin Chen, Chuan-Chung Chang, Ludovic Angot, Chir-Weei Chang, and Chung-Hao Tien, “Single-shot depth camera lens design optimization based on a blur metric,” Proc. SPIE 7787, Novel Optical Systems Design and Optimization XIII, 77870A (2010).
[11]Joshua Mark Hudman and Prafulla Masalkar, “Optical modules for use with depth cameras,” U.S. Patent 2015/0070489 A1 (Mar. 12 2015).
[12]Joshua Mark Hudman and Prafulla Masalkar, “Illumination modules that emit structured light,” U.S. Patent 9,443,310 B2 (Sep. 13 2016).
[13]Joshua Mark Hudman and Prafulla Masalkar, “Illumination light projection for a depth camera,” U.S. Patent 9,891,309 B2 (Feb. 13 2018).
[14]Anders Grunnet-Jepsen, Akihiro Takagi, John N. Sweetser, and Paul Winer, “Projector for active stereo depth sensors,” U.S. Patent 10,771,767 B2 (Sep. 8 2020).
[15]Dmitry Bakin, Markus Rossi, and Kai Engelhardt, “Illumination assembly for 3D data acquisition,” U.S. Patent 10,761,244 B2 (Sep. 1 2020).
[16]D. H. Kim, Y. G. Go, and S. M. Choi, “An Aerial Mixed-Reality Environment for First-Person-View Drone Flying,” Applied Sciences, 10(16), 5436 (2020).
[17]Adel Khelifi, Gabriele Ciccone, Mark Altaweel, Tasnim Basmaji, and Mohammed Ghazal, “Autonomous Service Drones for Multimodal Detection and Monitoring of Archaeological Sites,” Applied Sciences, 11(21), 10424 (2021).
[18]Y. Xu, M. Tong, W. Ming, Y. Lin, W. Mai, W. Huang, and Z. Chen, “A Depth Camera–Based, Task-Specific Virtual Reality Rehabilitation Game for Patients with Stroke: Pilot Usability Study,” JMIR Serious Games, 9(1), e20916 (2021).
[19]Sk. Mohammadul Haque, Avishek Chatterjee, and Venu Madhav Govindu, “High Quality Photometric Reconstruction using a Depth Camera,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2275-2282, Columbus, Ohio (24 June 2014).
[20]Alina Kuznetsova, Laura Leal-Taixe, and Bodo Rosenhahn, “Real-time sign language recognition using a consumer depth camera,” Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 83-90, Sydney, NSW, Australia (2 December 2013).
[21]Intel® RealSense™ Depth Camera D455,
https://www.intelrealsense.com/depth-camera-d455/
[22]Intel, “Intel RealSense D400 Series Product Family Datasheet,”
https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet
[23]Meet ZED 2, https://www.stereolabs.com/zed-2/
[24]FinePix 3D Digital Camera FinePix REAL 3D W3,
https://www.fujifilm.com.hk/products/3d/camera/finepix_real3dw3/specifications/
[25]C. W. Chang, Y. L. Chen, C. C. Chang, and P. C. Chen, “Phase coded optics for computational imaging systems,” Proc. SPIE 7723, Optics, Photonics, and Digital Technologies for Multimedia Applications, 772317 (April 2010).
[26]Y. Wang, L. Wang and Y. Yan, “Rotational speed measurement through digital imaging and image processing,” 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 1-6 (22 May 2017).
[27]OmniVision Technologies, “Color CMOS 1 Megapixel (1280x800) Image Sensor with OmniPixel®3‑GS Technology,” https://www.ovt.com/products/ov09782-ga4a/
[28]Synopsys Inc., “Optimization,” Code V Reference Manuals, Version 11.5.
[29]Wikipedia, Solid angle,
https://en.wikipedia.org/wiki/Solid_angle
[30]周柏亨, “Large-Aperture Projector Lens Design with Analysis of On-Screen Line Resolution, Lateral Chromatic Aberration Resolution, Relative Illumination, and MTF Under Temperature Variation,” Master's thesis, National Central University, 2020.
[31]Synopsys, Code V Electronic Document Library, Version 10.5, Lens System Setup for Reference Manuals, Chap. 1 (2012).
[32]Eugene Hecht, Optics, chap. 6, 4th ed., Addison-Wesley, 2001.
[33]黃前銘,「四百萬畫素DLP大口徑投影機鏡頭設計與溫度、電視畸變、橫向色差、相對照度之探討」,中央大學,碩士論文,民國107年。
[34]Synopsys Inc., “Lens System Setup,” Code V Reference Manuals, Version 11.5.
[35]Synopsys Inc., “Tolerancing,” Code V Reference Manuals, Version 11.5.
Advisor: Wen-Shing Sun (孫文信)    Date of Approval: 2022-8-23
