Master's/Doctoral Thesis 965202027 Detailed Record




Name Yu-Chung Kuo (郭予中)    Department Computer Science and Information Engineering
Thesis title Wide-scoped Surrounding Top-view Monitoring and Visual Detection
(廣域全周俯瞰監視與視覺偵測)
Related theses
★ A video error concealment method for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically-loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically-loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. Access to this electronic thesis: approved for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) In recent years, surrounding top-view monitoring systems have gradually been adopted as driving aids that reduce collision accidents by eliminating blind spots around the vehicle. These systems give the driver only a short-range view of the vehicle's surroundings, so they are limited to low-speed applications such as reversing and parking.
The proposed wide-scoped surrounding top-view monitoring and visual detection includes a method for building a wide-scoped surrounding top-view image with a wider field of view than existing systems, together with visual detection that locates obstacles in the imagery. The system consists of the following modules: camera calibration, multi-camera image registration, image fusion, ego-motion estimation, ground feature removal, and obstacle highlighting.
Using cameras mounted on the front, rear, and both sides of the vehicle, the system calibrates the camera images against a captured calibration board, estimates the camera poses, projects the images onto a 3D model, and blends them into a continuous, seamless surround image. Visual detection computes the vehicle's motion vector from the calibrated camera parameters and uses that motion vector to distinguish obstacles in the images.
We tested the system on asphalt, concrete, and tiled road surfaces. Experimental results show that on concrete and tiled surfaces visual detection effectively removes non-obstacle ground features, whereas low-contrast asphalt surfaces tend to introduce errors into the ego-motion estimate and hence cause detection errors.
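The top-view construction described in the abstract maps ground-plane points through each calibrated camera. As a minimal sketch (not the thesis's implementation): assuming a pinhole camera with intrinsics K and a pose [R | t] obtained from calibration-board estimation, the ground plane z = 0 projects through the homography H = K·[r1 r2 t], and inverting H warps pixels back onto the ground to form the bird's-eye view. All numeric values below are toy assumptions, not the system's actual parameters.

```python
import numpy as np

# Toy pinhole intrinsics (focal length in pixels, principal point); in the
# thesis these come from calibration-board (Zhang-style) estimation.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed pose: camera 1 m above the ground, optical axis pointing straight
# down. R maps world coordinates to camera coordinates.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 1.0])

# For the ground plane z = 0, a world point (X, Y, 0) projects through the
# homography H = K [r1 r2 t], where r1, r2 are the first two columns of R.
H = K @ np.column_stack((R[:, 0], R[:, 1], t))
H_inv = np.linalg.inv(H)

def ground_to_pixel(X, Y):
    """Project a ground point (metres) to image pixel coordinates."""
    p = H @ np.array([X, Y, 1.0])
    return p[:2] / p[2]

def pixel_to_ground(u, v):
    """Back-project a pixel onto the ground plane (the top-view warp)."""
    g = H_inv @ np.array([u, v, 1.0])
    return g[:2] / g[2]
```

Warping every pixel of a camera through `pixel_to_ground` (in practice, sampling the inverse mapping with interpolation) yields that camera's contribution to the bird's-eye view; overlapping regions from adjacent cameras are then blended, as in the system's color-blending module.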
Abstract (English) In recent years, surrounding top-view monitoring systems have become practical driving aids that help reduce collision hazards by eliminating blind spots. Most such systems provide only a short-range view around the vehicle, limiting their application to parking and reversing. In this thesis, we propose a practical system for creating wide-scoped surrounding imagery around the vehicle and highlighting obstacles in the driving environment. Using a calibration-board setup, the cameras mounted on each side of the vehicle are calibrated so that their projections form a continuous surround view on a dual-camber model. The projected imagery gives drivers the freedom to change viewpoints to suit different driving needs. By estimating the ego-motion of the vehicle from the cameras' input image sequences, the proposed system detects objects in the images by finding feature movements that do not correspond to ground motion relative to the vehicle's motion. Detected obstacles are highlighted in the wide-scoped surround top-view imagery to warn the driver of potential hazards.
We tested our system on asphalt, concrete, and tiled road surfaces with obstacles in the scene. The results show that while concrete and tiled surface features can be effectively removed, the feature-poor asphalt surface is prone to misdetection due to errors introduced during calibration and ego-motion estimation.
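The ego-motion and ground-feature-removal steps described above can be sketched as a least-squares rigid-motion fit in the top view: features that move consistently with the estimated vehicle motion are removed as road-surface features, and the remainder are kept as obstacle candidates. This is a hedged illustration using a standard Procrustes/Kabsch solution, not the thesis's exact formulation; the 0.05 m tolerance is an assumed value.

```python
import numpy as np

def estimate_ego_motion(p, q):
    """Least-squares 2D rigid motion (rotation R, translation t) mapping
    top-view feature points p (N x 2, frame k) onto q (N x 2, frame k+1).
    Standard Procrustes/Kabsch solution, standing in for the thesis's
    least-squares ego-motion step."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    M = (p - cp).T @ (q - cq)            # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def obstacle_flags(p, q, R, t, tol=0.05):
    """Flag features whose observed motion deviates from the predicted
    ground motion by more than `tol` (assumed threshold, top-view metres)
    as obstacle candidates; the rest are removed as surface features."""
    residual = np.linalg.norm(q - (p @ R.T + t), axis=1)
    return residual > tol
```

Ground features follow the vehicle's motion exactly, so their residuals stay near zero; an independently moving or out-of-plane point fails the check and survives as an obstacle candidate, matching the behavior the abstract describes on concrete and tiled surfaces.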
Keywords (Chinese) ★ Surrounding top-view monitoring
★ Visual detection
★ Intelligent vehicle
★ Blind spot
★ Vehicle safety
Keywords (English) ★ Wide-scoped Surrounding Top-view Monitoring
★ Visual Detection
★ Intelligent Vehicle
★ Blind-spot
★ Vehicle safety
Table of contents Abstract i
Table of Contents ii
List of Figures v
List of Tables ix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 System overview 2
1.3 Thesis organization 4
Chapter 2 Related Works 5
2.1 Vehicle surrounding monitoring systems 5
2.1.1 Nissan Around View Monitor 6
2.1.2 Honda Multi-view camera system 7
2.1.3 Bird’s eye vision system for vehicle surrounding monitoring 8
2.1.4 Omnidirectional cameras for backing-up aid 9
2.1.5 Monitoring surrounding areas for tractor-trailer combinations 10
2.1.6 Omni video based approach 11
2.2 Image fusion 12
2.2.1 Panorama construction 13
2.2.2 Interactive digital photomontage 14
2.3 Detection 14
2.3.1 Learning-based methods 15
2.3.2 Stereoscopic methods 18
2.3.3 Monocular methods 21
Chapter 3 Camera Calibration 23
3.1 Camera model 23
3.1.1 Coordinate systems 23
3.1.2 Distortion model 25
3.2 Estimate parameters 28
3.2.1 Intrinsic parameters estimation 29
3.2.2 Extrinsic parameters estimation 30
3.2.3 Distortion parameter estimation 30
3.2.4 Optimizing solution 31
Chapter 4 Wide-scoped Surrounding Top-view Monitoring 32
4.1 Dual-camber modeling 33
4.2 Image registration 34
4.2.1 Vehicle positioning 34
4.2.2 Calibration pattern 35
4.2.3 Estimate extrinsic parameters 38
4.2.4 Calculate camera position and orientation 38
4.3 Model construction 39
4.3.1 Vertex generation 40
4.3.2 Texture coordinates generation 41
4.3.3 Color blending 41
4.3.4 Model shapes 44
Chapter 5 Visual Detection 46
5.1 Ego-motion estimation 48
5.1.1 WI patch 48
5.1.2 Feature detection 49
5.1.3 Find motion vectors 51
5.1.4 Least squares ego-motion estimation 52
5.1.5 Dampening ego-motion 52
5.2 Detection 53
5.2.1 Feature detection 53
5.2.2 Remove surface features 53
5.3 Characteristics of road surfaces 54
5.3.1 Asphalt and concrete surface 54
5.3.2 Lane markings 56
5.3.3 Tiled surface 56
Chapter 6 Experiments 58
6.1 Environment settings 58
6.1.1 Vehicle 58
6.1.2 Camera setup 59
6.2 Calibration 61
6.2.1 Offline calibration 61
6.2.2 Combined calibration 62
6.3 Optimizations 65
6.3.1 Bird’s-eye view map 65
6.3.2 FOV lookup table 66
6.4 Detection results 66
6.4.1 Surface feature removal 67
6.4.2 Factors affecting surface feature removal 68
6.4.3 Effectiveness at higher speed 70
6.5 Ego-motion accuracy 70
Chapter 7 Conclusion and Future Works 72
References 74
References [1] Agarwala, A., M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, "Interactive digital photomontage," ACM Trans. on Graphics, vol.23, no.3, pp.294-302, Aug. 2004.
[2] Agrawal, M., K. Konolige, and M. R. Blas, "CenSurE: Center surround extremas for realtime feature detection and matching," in Proc. of 10th European Conf. on Computer Vision, Marseille, France, Oct.12-18, 2008, pp.102-115.
[3] Alpine, "Alpine HCE-C500 Topview camera system", http://www.alpine-europe.com/p/Products/camera58/hce-c500
[4] Bertozzi, M. and A. Broggi, "GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Trans. on Image Processing, vol.7, no.1, pp.62-81, Jan. 1998.
[5] Bertozzi, M., A. Broggi, P. Medici, and P. P. Porta, "Stereo vision-based start-inhibit for heavy goods vehicles," in Proc. of IEEE Intelligent Vehicles Symp., Tokyo, Japan, June 13-15, 2006, pp.350-355.
[6] Brown, M. and D. G. Lowe, "Recognising panoramas," in Proc. of Ninth IEEE Int. Conf. on Computer Vision, Nice, France, Oct.13-16, 2003, pp.1218-1225.
[7] Chou, T.-H., "Wide-scoped top-view monitoring and image-based parking guiding," Master Thesis, Dept. of Computer Science and Information Engineering, National Central University, Chung-li, Taiwan, 2010.
[8] Devernay, F. and O. Faugeras, "Straight lines have to be straight," Machine Vision and Application, vol.13, no.1, pp.14-24, Aug. 2001.
[9] Ehlgen, T. and T. Pajdla, "Monitoring surrounding areas of truck-trailer combinations," in Proc. of 5th Int. Conf. on Computer Vision Systems, Bielefeld, Germany, Mar.21-24, 2007, pp.207-218.
[10] Ehlgen, T., M. Thom, and M. Glaser, "Omnidirectional cameras as backing-up aid," in Proc. of IEEE 11th Int. Conf. on Computer Vision, Rio de Janeiro, Brazil, Oct.14-21, 2007, pp.1-5.
[11] Enkelmann, W., "Obstacle detection by evaluation of optical flow fields from image sequences," in Proc. of 1st European Conf. on Computer Vision, Antibes, France, Apr.23-27, 1990, pp.134-138.
[12] Fujitsu, "Fujitsu MB86R24 automotive graphics processing LSI", May 2013, http://jp.fujitsu.com/group/fsl/downloads/en/release/20130516e.pdf
[13] Gandhi, T. and M. Trivedi, "Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera," Machine Vision and Applications, vol.16, no.2, pp.85-95, Feb. 2005.
[14] Gandhi, T. and M. M. Trivedi, "Vehicle surround capture: survey of techniques and a novel omni-video-based approach for dynamic panoramic surround maps," IEEE Trans. on Intelligent Transport Systems, vol.7, no.3, pp.293-308, Sep. 2006.
[15] Hoiem, D., A. A. Efros, and M. Hebert, "Putting objects in perspective," Int. Journal of Computer Vision, vol.80, no.1, pp.3-15, Oct. 2008.
[16] Khronos, "OpenCL", http://www.khronos.org/opencl/
[17] Liu, Y.-C., K.-Y. Lin, and Y.-S. Chen, "Bird’s-eye view vision system for vehicle surrounding monitoring," in Proc. of the 2nd Int. Conf. on Robot Vision, Auckland, New Zealand, Feb.18-20, 2008, pp.207-218.
[18] Luxgen, "Luxgen advanced technologies", http://www.luxgen-motor.com/technology-Advanced.html
[19] NVIDIA, "CUDA developer zone", https://developer.nvidia.com/category/zone/cuda-zone
[20] NVIDIA, "Jetson automotive development platform", Apr. 2013, http://www.nvidia.com/object/jetson-automotive-developement-platform.html
[21] Rosten, E. and T. Drummond, "Machine learning for high-speed corner detection," in Proc. of European Conf. on Computer Vision, Graz, Austria, May 7-13, 2006, pp.430-443.
[22] Rosten, E., G. Reitmayr, and T. Drummond, "Fusing points and lines for high performance tracking," in Proc. of IEEE Int. Conf. on Computer Vision, Beijing, China, Oct.17-21, 2005, pp.1508-1511.
[23] Saxena, A., S. H. Chung, and A. Y. Ng, "3-D depth reconstruction from a single still image," Int. Journal of Computer Vision, vol.76, no.1, pp.53-69, Jan. 2008.
[24] Smart, "Smart Automobile", http://www.smart.com/
[25] Spillard, "Spillard Optronics 360", June 2012, http://www.spillard.com/solutions/vision/optronics/optronics360.html
[26] Wikipedia, "Speed limits by country", http://en.wikipedia.org/wiki/Speed_limits_by_country
[27] Zhang, Z., "A flexible new technique for camera calibration," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.22, no.11, pp.1330-1334, Nov. 2000.
Advisor Din-Chang Tseng (曾定章)    Date of review 2013-7-26
