Master's/Doctoral Thesis 995202069: Detailed Record




Name: Jun-wei Zhang (張鈞為)    Department: Computer Science and Information Engineering
Thesis Title: Refining Image-based Parking Guiding and Surrounding Top-view Monitoring
(改良影像式倒車導引與全周俯瞰監視)
Related theses
★ Video error concealment for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. The access right of this electronic thesis is set to immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  A main cause of car accidents is that drivers fail to notice obstacles while the vehicle is moving. In particular, the blind spots created by the limited coverage of the rear-view mirrors and the occlusion of the vehicle body are areas that many drivers simply cannot see, and collisions within these blind spots often damage the vehicle and injure people. To improve safety while parking and to reduce the time parking requires, we propose a refined image-based parking guidance and surrounding top-view monitoring system and implement part of it on an embedded system. The system consists of two parts: image-based parking guidance, which helps the driver steer into a parking space, and surrounding top-view monitoring, which helps the driver observe the conditions around the vehicle.
The image-based parking guidance system estimates optical flow from the rear-view image; after the flow vectors are filtered and accumulated, the front-wheel steering angle of the vehicle is computed and the driving trajectory is drawn. The surrounding top-view monitoring system mounts wide-angle cameras on the four sides of the vehicle to capture the surrounding scene. In the off-line stage, camera calibration, distortion correction, vignetting correction, and top-view transformation yield the relative relations among the four top-view images. An additional camera then photographs the features around the vehicle from above so that the four top-view images can be quickly registered into a single surrounding top-view image; finally, the color-blending weights are computed and all parameters are stored in a lookup table. In the on-line stage, histogram equalization first adjusts the brightness distribution of the images, and the lookup table is then used for interpolation and vignetting removal to produce the surrounding top-view image.
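To illustrate the guidance pipeline just described, here is a minimal sketch in which OpenCV's corner detector and pyramidal Lucas-Kanade tracker stand in for the thesis's corner detection and optical-flow stages; the function name, thresholds, and median-angle filter are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_flow_heading(prev_gray, curr_gray, max_angle_dev_deg=20.0):
    """Track corners between two consecutive rear-view frames, filter the
    flow vectors by direction, and reduce them to one average motion vector
    that a trajectory renderer could turn into guiding lines."""
    # Corner detection in the previous frame (stand-in for the thesis's
    # corner detector).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None

    # Pyramidal Lucas-Kanade optical flow for the detected corners.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    flow = (nxt - pts).reshape(-1, 2)[status.ravel() == 1]
    if len(flow) == 0:
        return None

    # Angle-based filtering: discard vectors whose direction deviates too far
    # from the dominant (median) direction of this frame pair.
    angles = np.degrees(np.arctan2(flow[:, 1], flow[:, 0]))
    flow = flow[np.abs(angles - np.median(angles)) < max_angle_dev_deg]
    if len(flow) == 0:
        return None

    # Accumulate (here: average) the surviving vectors; the lateral component
    # serves as a proxy for the turning tendency of the vehicle.
    return flow.mean(axis=0)  # (dx, dy) in pixels per frame
```

In the thesis, the accumulated flow is further mapped to a front-wheel steering angle through the vehicle motion model before the trajectory is drawn; that mapping is omitted from this sketch.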
At an output resolution of 720×480, the rear-view parking guidance runs at 170 frames per second on a PC with an Intel® Core™2 Duo 2.83 GHz CPU and 3 GB of RAM, and at 12 frames per second on a Texas Instruments DaVinci™ DM3730 1 GHz digital media processor development board. The surrounding top-view parking guidance reaches 43 frames per second on the same PC.
Abstract (English)  A main cause of car accidents is that drivers cannot see the area around their vehicles. To avoid such accidents and to reduce the time needed for parking, we propose a refined image-based parking guidance system and a surrounding top-view monitoring system for parking assistance.
The image-based parking guidance system uses the rear-view image to estimate, filter, and accumulate optical flows, from which the front-wheel angle of the vehicle is calculated and the driving trajectory is drawn.
The surrounding top-view monitoring system has four wide-angle cameras mounted on the front, the rear, and both sides of the vehicle to capture images. The system consists of off-line and on-line processes. In the off-line process, we calculate the camera intrinsic and extrinsic parameters, and then estimate the parameters of the distortion and vignetting models for distortion correction and vignetting compensation. We then estimate the homography matrices of the four cameras from a top-view reference image. Lastly, we calculate the blending weights of the overlapped regions and build a lookup table for the on-line process. In the on-line process, we use histogram equalization to adjust the brightness, and then interpolate the top-view image, compensate for the vignetting effect, and blend the overlapped regions. In our experiments, the image-based parking guidance system runs at 170 frames per second.
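To make the off-line/on-line split concrete, the sketch below assumes the four camera frames have already been distortion- and vignetting-corrected; cv2.findHomography stands in for the homography estimation from the overhead reference image, and the normalized coverage masks are an illustrative substitute for the thesis's precomputed color-mixing weights and lookup table, not the authors' implementation.

```python
import cv2
import numpy as np

# ---------- off-line stage (run once) ----------
def estimate_topview_homography(cam_pts, top_pts):
    """Homography mapping ground points seen by one (already undistorted)
    camera onto the common top-view plane, from at least four marked
    correspondences between the camera image and the overhead reference."""
    H, _ = cv2.findHomography(np.asarray(cam_pts, np.float32),
                              np.asarray(top_pts, np.float32))
    return H

def blending_weights(masks):
    """Per-camera weights for the overlapped regions.  Each mask is a float
    coverage map of one camera in top-view coordinates (e.g. an all-ones
    image warped with that camera's homography); normalizing by the total
    coverage makes the weights sum to one wherever cameras overlap."""
    total = np.clip(sum(masks), 1e-6, None)
    return [m / total for m in masks]

# ---------- on-line stage (per frame) ----------
def compose_topview(frames, homographies, weights, out_size):
    """Equalize brightness, warp each camera frame into the top-view canvas,
    and blend the overlaps with the precomputed weights."""
    canvas = np.zeros((out_size[1], out_size[0], 3), np.float32)
    for frame, H, w in zip(frames, homographies, weights):
        # Histogram equalization on the luminance channel so the four
        # cameras have a similar brightness distribution.
        y, cr, cb = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb))
        frame = cv2.cvtColor(cv2.merge([cv2.equalizeHist(y), cr, cb]),
                             cv2.COLOR_YCrCb2BGR)
        # Top-view warp; in the real system this lookup is baked into a table.
        warped = cv2.warpPerspective(frame, H, out_size).astype(np.float32)
        canvas += warped * w[:, :, None]
    return np.clip(canvas, 0, 255).astype(np.uint8)
```

Because the homographies, blending weights, and (in the full system) the per-pixel source coordinates are all frame-independent, they can be computed once off-line, leaving only table lookups, interpolation, and weighted sums for the on-line stage.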
Keywords (Chinese) ★ camera parameter calibration
★ image registration
★ bird's-eye-view transformation
★ image distortion correction
★ optical flow
★ image vignetting compensation
Keywords (English) ★ distortion correction
★ camera calibration
★ optical flow
Table of Contents
Abstract (Chinese) ... i
Abstract (English) ... ii
Acknowledgements ... iii
Table of Contents ... iv
List of Figures ... vii
List of Tables ... x
Chapter 1  Introduction ... 1
    1.1  Motivation ... 1
    1.2  System overview ... 2
    1.3  Thesis organization ... 3
Chapter 2  Related work ... 5
    2.1  Vehicle surrounding monitoring systems ... 5
    2.2  Parking assistance systems ... 13
    2.3  Feature point detection ... 16
Chapter 3  Fast surrounding top-view monitoring system ... 18
    3.1  Camera calibration ... 18
        3.1.1  Camera model ... 18
        3.1.2  Camera calibration method ... 22
        3.1.3  Constraints on the intrinsic parameters ... 23
        3.1.4  Solving for the intrinsic parameters ... 24
        3.1.5  Solving for the extrinsic parameters ... 25
        3.1.6  Estimating the optimal solution ... 25
    3.2  Lens distortion correction ... 26
        3.2.1  Lens distortion model ... 27
        3.2.2  Estimating the lens distortion parameters ... 28
        3.2.3  Estimating the optimal lens distortion parameters ... 29
    3.3  Vignetting calibration ... 29
        3.3.1  Vignetting model of wide-angle lenses ... 29
        3.3.2  Estimating the vignetting model parameters ... 30
        3.3.3  Removing the vignetting effect ... 31
    3.4  Fast image registration ... 32
        3.4.1  Planar projective transformation model ... 32
        3.4.2  Solving the planar projective transformation from feature correspondences ... 33
        3.4.3  Image correspondence ... 36
        3.4.4  Image registration ... 37
    3.5  Brightness adjustment and color interpolation ... 38
        3.5.1  Brightness adjustment ... 39
        3.5.2  Color interpolation ... 40
        3.5.3  Color blending ... 43
Chapter 4  Image-based parking guidance ... 45
    4.1  Vehicle motion trajectory model ... 45
    4.2  Corner detection ... 46
    4.3  Estimating vehicle heading by least-squares estimation ... 48
    4.4  Optical flow estimation ... 51
    4.5  Optical flow filtering ... 52
        4.5.1  Filtering optical flow by angle ... 52
        4.5.2  Filtering optical flow by squared error ... 53
    4.6  Computing the vehicle heading ... 54
    4.7  Drawing the guiding lines ... 55
Chapter 5  Experiments ... 56
    5.1  Experimental environment ... 56
    5.2  Camera calibration and distortion correction ... 58
    5.3  Vignetting correction ... 59
    5.4  Fast surrounding top-view transformation and image blending ... 59
    5.5  Brightness-adjusted surrounding top-view monitoring images ... 61
    5.6  Surrounding top-view parking guidance ... 61
        5.6.1  Rear top-view parking guidance ... 61
        5.6.2  Surrounding top-view parking guidance ... 63
Chapter 6  Conclusions and future work ... 66
    6.1  Conclusions ... 66
    6.2  Future work ... 67
References ... 68
Advisor: Din-Chang Tseng (曾定章)    Approval Date: 2012-7-16
