Abstract: | In most developed countries, the car is the means of transport that people encounter most often in daily life. Accidents frequently occur while reversing, when the driver, because of blind spots created by the car body or a moment of carelessness, fails to notice an obstacle or pedestrian behind the vehicle. Consequently, more and more research institutes and manufacturers have entered this field; Mazda's rear vehicle monitoring system, for example, assists the driver in reducing such accidents.

In this thesis, we propose an obstacle-detection and distance-estimation system for slow-moving autonomous vehicles based on a wide-angle lens. Because a wide-angle lens captures more of the surrounding scene, we transform its images into top-view images, which makes it easier to detect objects with height and to estimate the relative distance between the vehicle and each obstacle, so that the driver can be warned early and collisions caused by momentary inattention or a limited field of view can be avoided.

The obstacle-detection system on the top-view image consists of two parts: off-line camera calibration with look-up-table construction, and on-line obstacle detection with distance estimation. To improve the subsequent detection accuracy, we first calibrate the wide-angle camera and then use the obtained parameters to build a look-up table for the top-view transformation; this removes the heavy per-frame computation of the transformation, so obstacle detection can run quickly.

After the top-view transformation, we estimate the ego-motion vector on the top-view image to obtain the vehicle's moving direction and travelled distance. To locate surrounding obstacles, we detect salient corner points as reference points for optical-flow estimation and compare each flow vector with the ego-motion vector for similarity, thereby finding obstacle candidate feature points. A simple cluster-seeking algorithm then groups the candidates to locate each obstacle, and the flow lengths of the feature points inside each cluster are compared with the ego-motion vector to confirm whether the object is an upright (raised) one; distance estimation is then used to warn the driver.

Finally, we describe the experimental environments and analyse the detection results in each scenario. Compared with estimating ego-motion directly from feature points, our ego-motion estimation method corrects 89.95%, 86.03%, and 89.49% of the error on cement ground, on asphalt road, and in an indoor scene, respectively. The detection accuracies in the three scenarios are 81.94%, 72.38%, and 65.15%. When screening obstacle candidate feature points, the per-frame threshold is set according to the length of the ego-motion vector; as the threshold rises, fewer candidates remain, so the detection rate drops compared with the original, but false alarms are also reduced. With the threshold raised to 1.5 times the ego-motion length, the detection rates in the three scenarios are 77.39%, 67.83%, and 60.57%. In both settings the worst detection rate occurs indoors, mainly because the indoor tile texture and the reflective floor cause ego-motion estimation errors, which lowers the detection rate. In addition, the distance threshold used during clustering affects the detection result: its main effect is to merge neighbouring obstacles into one, or to split a single obstacle into several. In our system these cases still count as successful detections, so the detection rate is unaffected.

;For most developed countries, a car is one of the most popular means of transportation in daily life, but accidents easily occur when driving backward because of blind spots created by the car's structure and the driver's limited field of view. Therefore, many motor companies and related parts suppliers have invested in monitoring systems, such as Mazda's rear vehicle monitoring system and Nissan's Around View Monitor. However, these systems provide only a monitoring function; they cannot actively raise a warning before a possible collision. In this thesis, we install a camera with a fish-eye lens to capture wide-view-angle images and detect obstacles behind the moving vehicle. The images are transformed into top-view images to easily separate real obstacles from false ones.
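The candidate-screening step described above — keeping only feature points whose optical flow disagrees with the vehicle's ego-motion, with the threshold set to a multiple (e.g. 1.5) of the ego-motion length — can be sketched as follows. This is a minimal, hypothetical simplification of the similarity test in the abstract; the function name and the flow representation are illustrative, not taken from the thesis.

```python
import math

def filter_obstacle_candidates(flows, ego_motion, k=1.5):
    """Keep optical-flow vectors that disagree with the ego-motion.

    flows:      list of (dx, dy) flow vectors measured on the top-view image.
    ego_motion: (dx, dy) estimated vehicle motion on the top-view image.
    k:          threshold factor (the abstract reports results for k = 1.5).

    On the ground plane, a static point's flow matches the ego-motion,
    whereas a point with height moves differently in the top view, so flows
    longer than k times the ego-motion length are kept as candidates.
    """
    threshold = k * math.hypot(*ego_motion)
    return [f for f in flows if math.hypot(*f) > threshold]

# Example: with ego-motion (1, 0) and k = 1.5, only the long flow survives.
candidates = filter_obstacle_candidates([(1.0, 0.0), (5.0, 1.0)], (1.0, 0.0))
```

Raising `k` trades detection rate for fewer false alarms, which matches the trend reported in the experiments.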
The proposed system consists of two parts. The first is an off-line camera calibration module. Before the camera is used for 3D measurement, its intrinsic, extrinsic, and distortion parameters must be estimated to obtain accurate measurements. A composite calibration method combining a fisheye model and a stereographic projection model is proposed to calibrate the distortion parameters. The second is an on-line transformation-and-detection module. After the top-view transformation, we estimate the vehicle's ego-motion to obtain its moving direction and travelled distance. In addition, to find surrounding obstacles, corner points are extracted and their optical flows are estimated. Finally, real obstacles are detected by comparing their optical flows with the vehicle's ego-motion. In the experiments, the proposed system is evaluated in three scenarios: a cement road, an asphalt road, and a corridor of a large building. The proposed method reduces the ego-motion error per frame by 89.95% on the cement ground, 86.03% on the asphalt road, and 89.49% inside the building. In the three scenarios, the detection rates are 81.94%, 72.38%, and 65.15%, respectively. |
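The grouping of candidate feature points into obstacle locations uses a simple cluster-seeking algorithm with a distance threshold. A generic sketch of such an algorithm is given below, under the assumption that each point joins the first cluster whose centre lies within the threshold; the parameter names and the running-mean centre update are illustrative choices, not details from the thesis.

```python
import math

def cluster_points(points, dist_threshold):
    """Simple cluster-seeking: assign each (x, y) point to the first cluster
    whose centre is within dist_threshold, else start a new cluster.
    Returns a list of clusters (each a list of points)."""
    centers, clusters = [], []
    for p in points:
        for i, c in enumerate(centers):
            if math.dist(p, c) <= dist_threshold:
                clusters[i].append(p)
                # Update the cluster centre as the running mean of its points.
                n = len(clusters[i])
                centers[i] = ((c[0] * (n - 1) + p[0]) / n,
                              (c[1] * (n - 1) + p[1]) / n)
                break
        else:
            centers.append(p)
            clusters.append([p])
    return clusters

# Example: two nearby points merge into one cluster; the far point is separate.
clusters = cluster_points([(0.0, 0.0), (0.5, 0.0), (10.0, 10.0)], 2.0)
```

As the abstract notes, the choice of `dist_threshold` mainly decides whether neighbouring obstacles merge into one cluster or one obstacle splits into several; either way a detection is still produced.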