Thesis record 965202032: detailed information




Author: Chi-lei Ho (何祈磊)    Department: Computer Science and Information Engineering
Thesis title: A surrounding bird-view monitor system with image refinement for parking assistance
(具有影像優化處理的環場鳥瞰監視停車輔助系統)
Related theses
★ A video error concealment method for large damaged areas and scene changes ★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis ★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system ★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation ★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques ★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria ★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically loaded multiresolution terrain modeling ★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression ★ Appearance-preserving and view-dependent multiresolution modeling
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. The open-access electronic full text is authorized only for academic research: personal, non-profit retrieval, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) With advances in technology, the public demands ever higher driving safety. When parking or reversing, drivers usually need to turn their heads or use the rear-view mirrors to watch the areas behind and beside the vehicle. However, the vehicle body structure and mirror angles limit the rear field of view, leaving blind spots, and at the same time it is hard to attend to the rest of the vehicle's surroundings, so the driver cannot fully monitor the environment around the vehicle. To improve convenience and safety during parking, we propose a surrounding bird-view parking assistance system: without turning the head, the driver can tell from the synthesized surrounding bird-view image whether obstacles exist and where they are relative to the vehicle, which greatly improves driving safety. The main steps of the system are camera calibration, image distortion correction, vignetting compensation, bird-view transformation, bird-view image registration, illumination adjustment, and color blending.
The system mounts four wide-angle cameras around the vehicle to capture its surroundings. Although wide-angle cameras provide a broad field of view, they also suffer from severe image distortion and vignetting. We use a planar-pattern-based camera calibration technique to obtain the intrinsic parameters needed for distortion correction and vignetting removal, and apply a planar projective transformation to convert each image to a bird-view image. We then detect corner points in the bird-view images and match them to obtain the geometric transformations among the four images, stitching them into a single top-down surrounding bird-view image of the vehicle. Finally, we analyze the luminance variation of each image to equalize the brightness of the four images, and blend the colors in the overlapped regions to complete the seamless surrounding bird-view image. The on-line procedure of the system runs at 26 frames per second on a PC with an Intel® Pentium® Core2 Duo 2.66 GHz CPU and 1.99 GB RAM.
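The bird-view transformation described above maps each camera pixel through a 3×3 planar projective transformation (homography). A minimal sketch of that per-pixel mapping, with a hypothetical `apply_homography` helper that is not from the thesis:

```python
def apply_homography(H, x, y):
    """Map a source pixel (x, y) to the bird-view plane through a
    3x3 homography H given as row-major nested lists."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    # perspective division turns homogeneous coordinates back
    # into image coordinates
    return xs / w, ys / w

# the identity homography leaves points unchanged
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 3.0, 4.0))  # (3.0, 4.0)
```

In the real system the nine entries of H come from the calibration stage (intrinsic/extrinsic parameters or direct feature correspondences); the sketch only shows how a pixel travels through the matrix.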
Abstract (English) When parking, drivers usually need to turn their heads or look through the side-view and rear-view mirrors to watch the area behind the vehicle. Due to the limited field of view, drivers are mostly unable to see the whole area around the vehicle. For driving safety, we present a real-time surrounding bird-view monitoring system for parking assistance. We mount four wide-angle cameras at the front, rear, and both sides of the vehicle to capture four synchronized image streams from which the surrounding bird-view images are synthesized. The work consists of an off-line and an on-line process.
In the off-line process, we calibrate the camera intrinsic and extrinsic parameters and then estimate the parameters of the distortion and vignetting models for distortion correction and vignetting compensation. We obtain four bird-view images by computing the homography between each image plane and a virtual image plane parallel to the ground. Based on feature points in these images, we register the four bird-view images to produce a single surrounding bird-view image. Furthermore, we build a lookup table that records the mapping between the images captured by the wide-angle cameras and the surrounding bird-view image to speed up the on-line process.
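The off-line registration step aligns neighboring bird-view images with a rigid (rotation plus translation) transformation fitted to matched corner points. In 2-D this least-squares fit has a closed form; a pure-Python sketch follows, where the function name and the point-list format are illustrative assumptions, not the thesis code:

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares 2-D rigid fit: return (theta, tx, ty) such that
    rotating src by theta and translating by (tx, ty) best matches dst.
    src and dst are equal-length lists of matched (x, y) pairs."""
    n = len(src)
    # centroids of both point sets
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    # accumulate dot and cross terms of the centered points;
    # the optimal angle is atan2(sum cross, sum dot)
    s_dot = s_cross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys
        bx, by = xd - cxd, yd - cyd
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the rotated source centroid onto the target centroid
    tx = cxd - (c * cxs - s * cys)
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty
```

In the system the matched pairs would come from the corner detection and feature matching steps, typically wrapped in an outlier-rejection loop such as RANSAC before the final fit.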
In the on-line process, we use bilinear interpolation to obtain the color value of every pixel of the surrounding bird-view image; we then compensate for the vignetting effect, equalize the illumination, and blend the colors of the overlapped regions to obtain seamless images with smooth illumination. Finally, the surrounding bird-view images are presented to the driver for monitoring the area around the vehicle.
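The on-line stage samples each source image at the fractional coordinates stored in the lookup table using bilinear interpolation. A toy grayscale sketch of that sampling, with a hypothetical `bilinear` helper (the thesis applies the same idea per color channel):

```python
def bilinear(img, x, y):
    """Sample img (a list of rows of gray values) at fractional
    coordinates (x, y) by bilinear interpolation.
    Assumes 0 <= x < width - 1 and 0 <= y < height - 1."""
    x0, y0 = int(x), int(y)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    # interpolate horizontally along the top and bottom rows,
    # then vertically between the two results
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 10], [20, 30]]
print(bilinear(img, 0.5, 0.5))  # 15.0
```

Because the mapping from bird-view pixels back to camera pixels is fixed once calibration is done, these fractional coordinates are precomputed in the lookup table and only the interpolation itself runs per frame.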
Keywords (Chinese) ★ illumination adjustment
★ color blending
★ image registration
★ bird-view transformation
★ vignetting compensation
★ distortion correction
★ camera calibration
Keywords (English) ★ illumination uniformity
★ homography
★ image stitching
★ color blending
★ camera calibration
★ vignetting compensation
★ distortion correction
Table of contents Abstract (Chinese) II
Abstract (English) III
Acknowledgements IV
Table of contents V
List of figures VII
List of tables XI
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 System overview 2
1.3 Thesis organization 4
Chapter 2 Related work 5
2.1 Vehicle surrounding monitoring systems 5
2.2 Bird-view transformation 8
2.3 Image stitching 11
Chapter 3 Camera calibration 17
3.1 Camera parameter calibration 17
3.1.1 Camera model 17
3.1.2 Camera calibration method 21
3.1.3 Projective transformation between the calibration pattern and the image plane 21
3.1.4 Constraints on the intrinsic parameters 22
3.1.5 Solving for the intrinsic parameters 22
3.1.6 Solving for the extrinsic parameters 24
3.1.7 Estimating the optimal solution 24
3.1.8 Summary of camera calibration 25
3.2 Lens distortion correction 25
3.2.1 Distortion model 26
3.2.2 Estimating the distortion parameters 27
3.2.3 Estimating the optimal solution 28
3.3 Vignetting calibration 29
3.3.1 Vignetting model 29
3.3.2 Estimating the vignetting model parameters 30
3.3.3 Adjusting pixel brightness to remove the vignetting effect 31
Chapter 4 Bird-view transformation and stitching 32
4.1 Bird-view transformation 32
4.1.1 Bird-view transformation from the intrinsic and extrinsic parameters 33
4.1.2 Solving the planar projective transformation directly from feature correspondences 34
4.2 Image registration 36
4.2.1 Geometric transformation 37
4.2.2 Corner detection 38
4.2.3 Feature matching 41
4.2.4 Computing the rigid transformation 42
4.3 Lookup-table construction and interpolation 44
4.4 Illumination uniformization 46
4.5 Color blending 48
Chapter 5 Experiments 49
5.1 Equipment and setup 49
5.2 Camera calibration 51
5.3 Bird-view transformation 54
5.4 Image registration 57
5.5 Vignetting compensation and illumination adjustment 59
5.6 Experimental platform and results 60
Chapter 6 Conclusions and future work 65
6.1 Conclusions 65
6.2 Future work 66
References 67
Advisor: Din-chang Tseng (曾定章)    Date of approval: 2009-7-21
