Master's/Doctoral Thesis 104522007: Detailed Record




Name: Qing-Wei Chen (陳卿瑋)    Department: Computer Science and Information Engineering
Thesis Title: Obstacle Detection and Distance Estimation for a Low-Speed Automatic Vehicle with a Fisheye Camera
Related Theses
★ Video Error Concealment for Large Areas and Scene Changes
★ Force Feedback Correction and Rendering in a Virtual Haptic System
★ Multispectral Satellite Image Fusion and Infrared Image Synthesis
★ A Laparoscopic Cholecystectomy Surgery Simulation System
★ Dynamically Loaded Multiresolution Terrain Modeling in a Flight Simulation System
★ Wavelet-Based Multiresolution Terrain Modeling and Texture Mapping
★ Multiresolution Optical-Flow Analysis and Depth Computation
★ Volume-Preserving Deformation Modeling for Laparoscopic Surgery Simulation
★ Interactive Multiresolution Model Editing Techniques
★ Wavelet-Based Multiresolution Edge Tracking for Edge Detection
★ Multiresolution Modeling Based on Quadric Error and Attribute Criteria
★ Progressive Image Compression Based on Integer Wavelet Transform and Grey Theory
★ Tactical Simulation Built on Dynamically Loaded Multiresolution Terrain Modeling
★ Face Detection and Feature Extraction Using Spatial Relations from Multilevel Segmentation
★ Wavelet-Based Image Watermarking and Compression
★ Appearance-Preserving and View-Dependent Multiresolution Modeling
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. The open-access electronic full text is authorized only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) In most developed countries, the automobile is the means of transportation that people encounter most often in daily life. Accidents frequently happen while reversing: because of blind spots created by the vehicle body, or a moment of inattention, the driver fails to notice an obstacle or pedestrian behind the vehicle. As a result, more and more research institutes and manufacturers have entered this field; Mazda's rear vehicle monitoring system, for example, assists the driver in order to reduce such accidents.
In this thesis, we propose an obstacle detection and distance estimation system for a low-speed automatic vehicle based on a wide-angle (fisheye) camera. Because a wide-angle lens captures more of the surrounding scene, the images can be transformed into top-view images in which objects with height are easier to detect; the system then estimates the relative distance to each obstacle and warns the driver early, preventing collisions caused by momentary inattention or a limited field of view.
The top-view obstacle detection system consists of two parts: off-line camera calibration with lookup-table construction, and on-line obstacle detection with distance estimation. To raise the accuracy of the later detection stage, we first calibrate the wide-angle camera and use the resulting parameters to build a lookup table for the top-view transformation; this removes the heavy per-frame cost of computing the transformation, so obstacle detection can run quickly.
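As a rough illustration of the lookup-table idea, the sketch below composes a fisheye undistortion map and a ground-plane homography into a single remap table. It uses OpenCV's generic fisheye model rather than the thesis's composite fisheye/stereographic model, and the intrinsics K, distortion coefficients D, homography H, and image sizes are all placeholder assumptions, not the thesis's calibration results.

```python
import cv2
import numpy as np

# Placeholder calibration values (assumed, not the thesis's results).
K = np.array([[300.0, 0.0, 640.0],
              [0.0, 300.0, 360.0],
              [0.0, 0.0, 1.0]])            # intrinsic matrix
D = np.array([0.1, -0.05, 0.01, -0.002])   # fisheye distortion coefficients
H = np.eye(3)                              # ground-plane-to-top-view homography
src_w, src_h = 1280, 720                   # raw frame size
top_w, top_h = 480, 640                    # top-view image size

# Step 1: undistortion maps (undistorted pixel -> raw fisheye pixel).
map_x, map_y = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (src_w, src_h), cv2.CV_32FC1)

# Step 2: top-view pixel -> undistorted pixel, via the inverse homography.
ys, xs = np.indices((top_h, top_w), dtype=np.float64)
pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
uvw = np.linalg.inv(H) @ pts
u = (uvw[0] / uvw[2]).reshape(top_h, top_w).astype(np.float32)
v = (uvw[1] / uvw[2]).reshape(top_h, top_w).astype(np.float32)

# Step 3: chain the two mappings into one lookup table
# (top-view pixel -> raw fisheye pixel).
lut_x = cv2.remap(map_x, u, v, cv2.INTER_LINEAR)
lut_y = cv2.remap(map_y, u, v, cv2.INTER_LINEAR)

def to_top_view(frame):
    """Warp a raw fisheye frame to the top view with a single table lookup."""
    return cv2.remap(frame, lut_x, lut_y, cv2.INTER_LINEAR)
```

Folding undistortion and warping into one table means the on-line stage pays only one bilinear remap per frame, which is what makes the later detection steps fast.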
After the top-view transformation, we first estimate the ego-motion vector on the top-view image to obtain the vehicle's moving direction and distance. To find surrounding obstacles, we detect salient corners as base points for optical-flow estimation and compare each flow vector with the ego-motion vector; points whose flow is dissimilar become obstacle candidate feature points. The candidates are then grouped by a simple cluster-seeking algorithm to locate each obstacle, and the flow lengths of the points inside a cluster are compared with the ego-motion vector to confirm whether the cluster is an object with height; distance estimation then serves to warn the driver.
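The sketch below outlines this flow-versus-ego-motion test under simplifying assumptions: consecutive grayscale top-view frames are given, goodFeaturesToTrack and calcOpticalFlowPyrLK stand in for the thesis's corner and optical-flow modules, and the median flow serves as a stand-in ego-motion estimate (the thesis estimates ego-motion separately). The function name detect_obstacle_candidates is hypothetical.

```python
import cv2
import numpy as np

def detect_obstacle_candidates(prev_top, curr_top, ratio=1.5):
    """Keep feature points whose optical flow disagrees with the ego-motion.

    prev_top, curr_top: consecutive grayscale top-view frames (uint8).
    ratio: threshold scale on the ego-motion length.
    """
    # Salient corners as base points for optical-flow estimation.
    corners = cv2.goodFeaturesToTrack(prev_top, maxCorners=300,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return np.empty((0, 2)), np.zeros(2)

    # Pyramidal Lucas-Kanade optical flow at each corner.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_top, curr_top, corners, None)
    ok = status.ravel() == 1
    old_pts = corners.reshape(-1, 2)[ok]
    new_pts = nxt.reshape(-1, 2)[ok]
    flows = new_pts - old_pts
    if len(flows) == 0:
        return np.empty((0, 2)), np.zeros(2)

    # Stand-in ego-motion: the median flow, since ground points dominate
    # the top view and all of them share the vehicle's motion.
    ego = np.median(flows, axis=0)

    # In a top view, points on objects with height appear to move farther
    # than ground points; flows longer than ratio * |ego| become candidates.
    flow_len = np.linalg.norm(flows, axis=1)
    candidates = new_pts[flow_len > ratio * np.linalg.norm(ego)]
    return candidates, ego
```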
Finally, we describe the experimental environment and analyze the detection results of the system in each scenario. On cement ground, on asphalt road, and indoors, our ego-motion estimation corrects 89.95%, 86.03%, and 89.49% of the error, respectively, compared with estimating ego-motion directly from feature points. The detection rates in the three scenarios are 81.94%, 72.38%, and 65.15%. When selecting obstacle candidate feature points, the per-frame threshold is derived from the length of the ego-motion vector: raising the threshold reduces the number of candidates, which lowers the detection rate but also reduces false alarms. With the threshold raised to 1.5 times the ego-motion length, the detection rates in the three scenarios become 77.39%, 67.83%, and 60.57%. In both settings the worst detection rate occurs indoors, mainly because the tile texture and the reflective floor cause ego-motion estimation errors. The clustering distance threshold also affects the detection results: its main effect is that neighboring obstacles may be merged into one, or one obstacle may be split into several; in our system both cases still count as successful detections, so the detection rate is unaffected.
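For illustration, a minimal version of such a simple cluster-seeking pass is sketched below: each candidate point joins the first cluster whose center lies within the distance threshold, otherwise it seeds a new cluster. The threshold value here is a placeholder; as noted above, too small a value splits one obstacle into several clusters, while too large a value merges neighboring obstacles.

```python
import numpy as np

def simple_cluster_seeking(points, dist_thresh=30.0):
    """Group candidate feature points into obstacle clusters.

    points: (N, 2) array of candidate points in the top-view image.
    dist_thresh: clustering distance threshold in pixels (placeholder).
    Returns a list of (center, member_points) pairs.
    """
    clusters = []  # each entry: [center (2,), list of member points]
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - c[0]) < dist_thresh:
                c[1].append(p)
                c[0] = np.mean(c[1], axis=0)  # re-center on the members
                break
        else:  # no cluster close enough: seed a new one
            clusters.append([p.astype(float), [p]])
    return [(c[0], np.array(c[1])) for c in clusters]
```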
Abstract (English)
In most developed countries, the car is one of the most popular means of transportation in daily life, but accidents easily occur when driving backward because of the car's structure and the driver's limited field of view. Therefore, many motor companies and related parts suppliers have invested in monitoring systems, such as Mazda's rear vehicle monitoring system and Nissan's Around View Monitor. However, these systems provide only a monitoring function; they cannot actively raise a warning before a possible collision.
In this thesis, we install a camera with a fisheye lens to capture wide-angle images and detect obstacles behind the moving vehicle. The images are transformed into top-view images, in which real and false obstacles are easier to separate.
The proposed system consists of two parts. The first is an off-line camera calibration module. Before the camera is used for 3D measurement, its intrinsic, extrinsic, and distortion parameters must be calibrated to obtain accurate measurements; a composite calibration method combining a fisheye model and a stereographic projection model is proposed to calibrate the distortion parameters. The second is an on-line transformation and detection module. After the top-view transformation, we estimate the vehicle's ego-motion to obtain its moving direction and distance. To find the surrounding obstacles, corners are extracted and their optical flows are estimated. Finally, real obstacles are detected by comparing their optical flows with the vehicle's ego-motion.
In the experiments, the proposed system is evaluated in three scenarios: on a cement road, on an asphalt road, and in a corridor of a large building. The proposed method corrects the per-frame ego-motion error by 89.95% on cement ground, 86.03% on asphalt road, and 89.49% in the building. The detection rates in the three scenarios are 81.94%, 72.38%, and 65.15%, respectively.
Keywords (Chinese) ★ camera calibration
★ optical flow
★ distance estimation
★ obstacle detection
Keywords (English) ★ fisheye calibration
★ optical flow
★ distance estimation
★ obstacle detection
Table of Contents

Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables

Chapter 1  Introduction
  1.1  Motivation
  1.2  System Overview
  1.3  Thesis Organization

Chapter 2  Related Work
  2.1  Camera Calibration
  2.2  Top-View Images
  2.3  Obstacle Detection
    2.3.1  Monocular Motion Information
    2.3.2  Binocular Stereo Vision
    2.3.3  Machine Learning on Static Information
  2.4  Distance Estimation

Chapter 3  Camera Calibration and Top-View Transformation
  3.1  Camera Parameter Calibration
    3.1.1  Camera Model
    3.1.2  Calibration Method
    3.1.3  Constraints on the Intrinsic and Extrinsic Parameters
    3.1.4  Solving the Intrinsic and Extrinsic Parameters
    3.1.5  Estimating the Optimal Solution
  3.2  Lens Distortion Correction
    3.2.1  Distortion Model
    3.2.2  Estimating the Distortion Parameters
  3.3  Top-View Transformation
    3.3.1  Planar Projective Transformation from the Camera Parameters
    3.3.2  Planar Projective Transformation from Feature Correspondences
  3.4  Lookup Table
    3.4.1  Interpolation
    3.4.2  Table Construction

Chapter 4  Top-View Obstacle Detection and Distance Estimation
  4.1  Feature Point Detection
  4.2  Obstacle Detection with Motion Vectors
    4.2.1  Computing Optical-Flow Vectors
    4.2.2  Computing the Ego-Motion Vector
    4.2.3  Selecting Obstacle Candidate Feature Points
    4.2.4  Clustering Obstacle Feature Points
  4.3  Distance Estimation

Chapter 5  Experiments
  5.1  Experimental Environment
  5.2  Experimental Results and Comparisons

Chapter 6  Conclusions and Future Work
  6.1  Conclusions
  6.2  Future Work

References
Advisor: Din-Chang Tseng (曾定章)    Date of Approval: 2017-8-22