Master's/Doctoral Thesis 975904601: Detailed Record




Author: 阮門督 (Minh-Duc Nguyen)    Department: Computer Science and Information Engineering
Thesis Title: 利用隱式型態模式之高速公路前車偵測機制
(A Highway Preceding Vehicle Detection Scheme By Using Implicit Shape Model)
Related Theses
★ Implementation of a Cross-Platform Wireless Heart-Rate Analysis System Based on Qt ★ An Extra-Message Transmission Mechanism for VoIP
★ Detection of Transition Effects Related to Sports Highlights ★ Video/Image Content Authentication Based on Vector Quantization
★ A Baseball Highlight Extraction System Based on Transition-Effect Detection and Content Analysis ★ Image/Video Content Authentication Based on Visual Feature Extraction
★ Foreground Detection and Tracking in Moving Surveillance Video Using Dynamic Background Compensation ★ Adaptive Digital Watermarking for H.264/AVC Video Content Authentication
★ A Baseball Highlight Extraction and Classification System ★ A Real-Time Multi-Camera Tracking System Using H.264/AVC Features
★ A Video Copy Detection Scheme Based on Temporal and Spatial Feature Extraction ★ In-Vehicle Video Coding Combining Digital Watermarking and ROI Bit-Rate Control
★ H.264/AVC Video Encryption/Decryption and Digital Watermarking for Digital Rights Management ★ A News Video Analysis System Based on Text and Anchorperson Detection
★ An H.264/AVC Video Content Authentication Scheme Based on Digital Watermarking ★ An Adaptive In-Vehicle Surveillance Video Analysis System Using the Implicit Shape Model
  1. The electronic full text of this thesis has been approved for immediate open access.
  2. Users of the open-access electronic full text are authorized to search, read, and print it only for personal, non-profit academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese) Developing a practical driver assistance system to ensure driving safety has become an increasingly important issue. The main danger a driver faces on the highway comes from possible collisions caused by failing to keep a proper distance from the preceding vehicle, so knowing the relative positions of the preceding and surrounding vehicles can greatly reduce this risk. In this thesis, we develop a highway preceding-vehicle detection/tracking scheme and design a monocular vision-based system to locate the vehicle ahead. Our approach follows an appearance-based methodology, namely the Implicit Shape Model. We first collect training images from real scenes and use them to build a codebook. The training images fall into three groups: full rear views of vehicles, partial rear views taken from the left, and partial rear views taken from the right. By extracting interest points with the scale-invariant feature transform (SIFT), we obtain features that represent the preceding vehicles well, and we then cluster these features to build the codebook. For detection and tracking, SIFT is applied again to the real scenes; in each scene the extracted features are compared against the codebook to find matching representative features. Once a model is matched, the region of interest (ROI) can be identified from the scale and position indicated by the model. We can then continue searching the previously defined left-vehicle and right-vehicle ROIs to detect more possible vehicles. The experimental results show that vehicles can be detected in all three regions.
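To make the codebook-building step described in the abstract more concrete, the following Python/OpenCV sketch extracts SIFT descriptors from rear-view training images and clusters them into visual words. It is only an illustrative outline: the training-image paths, the number of clusters, and the use of k-means (the ISM papers cited in the references use agglomerative clustering) are assumptions, not the implementation used in the thesis.

# Illustrative sketch: SIFT descriptors from rear-view training images are
# clustered into a visual codebook. Paths, cluster count, and k-means are
# assumptions for illustration only.
import glob
import cv2
import numpy as np

def build_codebook(image_glob, n_words=200):
    """Extract SIFT descriptors from training images and cluster them."""
    sift = cv2.SIFT_create()
    descriptors = []
    for path in glob.glob(image_glob):          # e.g. "train/rear_full/*.png" (hypothetical layout)
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            descriptors.append(desc)
    data = np.vstack(descriptors).astype(np.float32)

    # k-means stands in for the clustering step described in the abstract.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, _, centers = cv2.kmeans(data, n_words, None, criteria, 5,
                               cv2.KMEANS_PP_CENTERS)
    return centers                              # codebook entries (visual words)

# One codebook/model per training subset: full rear view, rear-left, rear-right.
# codebook_full = build_codebook("train/rear_full/*.png")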
Abstract (English) Developing a practical driver assistance system for ensuring driving safety has become an increasingly important issue. The major risk of driving on the highway comes from possible collisions with the preceding vehicle when a suitable distance is not well maintained. Therefore, knowing the relative positions of the preceding vehicle and the surrounding cars should significantly reduce the risk. In this thesis, we develop a highway preceding vehicle detection/tracking scheme, in which a monocular vision-based system for detecting the preceding vehicle in close and mid-range view is designed to provide a better view for the driver.
Our approach is based on an appearance-based methodology, i.e., the Implicit Shape Model. A codebook is built for vehicle detection and tracking by using training images captured from real scenes. The collection of training images is divided into three parts: full rear views and partial rear views from the left and right sides. By applying the scale-invariant feature transform (SIFT) to extract interest points, we obtain a set of good features representing the preceding vehicles. We then group those features by clustering to build up the codebook, and three models are thus constructed. To detect and track objects, we apply the SIFT detector again to the real scenes. In each scene, we compare the extracted features with the codebook to find matched representative features. Once a model is found, we can identify the ROI based on the scale and position indicated in the model. We can continue searching to the left and right of the pre-identified ROI to detect more possible vehicles. The experimental results show that vehicles can be detected in each of the three areas, i.e., right in front of the driver and in the left/right-hand side areas.
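The recognition step can likewise be sketched as generalized-Hough-style voting: SIFT features from a highway frame are matched to codebook entries, and each match casts votes for a candidate vehicle centre whose strongest peak gives a rough ROI location. The occurrences table (per-word offsets and scales assumed to have been recorded during training), the matching threshold, and the smoothed peak search standing in for the mean-shift mode estimation used in the thesis are all assumptions for illustration.

# Minimal sketch of ISM-style recognition under assumed data structures:
# `codebook` is an (N, 128) array of visual words, `occurrences[word]` a list
# of (dx, dy, scale) tuples hypothetically recorded during training.
import cv2
import numpy as np

def detect_vehicle(frame_gray, codebook, occurrences, match_thresh=300.0):
    """Return the voting-space maximum as a rough ROI centre, or None."""
    sift = cv2.SIFT_create()
    keypoints, desc = sift.detectAndCompute(frame_gray, None)
    if desc is None:
        return None

    votes = np.zeros(frame_gray.shape, dtype=np.float32)   # x-y voting space
    for kp, d in zip(keypoints, desc):
        # Nearest codebook word by Euclidean distance.
        dists = np.linalg.norm(codebook - d, axis=1)
        word = int(np.argmin(dists))
        if dists[word] > match_thresh:
            continue
        # Each stored occurrence votes for a candidate vehicle centre,
        # scaled by the ratio of the observed to the training keypoint size.
        for dx, dy, scale in occurrences[word]:
            cx = int(kp.pt[0] + dx * kp.size / scale)
            cy = int(kp.pt[1] + dy * kp.size / scale)
            if 0 <= cy < votes.shape[0] and 0 <= cx < votes.shape[1]:
                votes[cy, cx] += 1.0

    votes = cv2.GaussianBlur(votes, (15, 15), 0)            # smooth before peak search
    cy, cx = np.unravel_index(np.argmax(votes), votes.shape)
    return (cx, cy) if votes[cy, cx] > 0 else None

In the scheme described by the table of contents below, the strongest hypotheses in the voting space are further refined into bounding boxes around the detected vehicles.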
Keywords (Chinese) ★ 車偵測 (vehicle detection)
Keywords (English) ★ Implicit Shape Model ★ Vehicle detection
Table of Contents
摘要 (Chinese Abstract)
Abstract
Acknowledgments
List of Figures
Explanation of Symbols
Chapter 1: Introduction
1.1 Background
1.2 Thesis objective
1.3 Problem conditions
1.4 Thesis organization
Chapter 2: Related Works
2.1 Knowledge-based methods
2.1.1 Color
2.1.2 Corners and edges
2.1.3 Symmetry
2.1.4 Shadow
2.1.5 Texture
2.1.6 Vehicle lights
2.2 Stereo-based method
2.3 Motion-based methods
2.4 Summary
Chapter 3: Vehicle Detection
3.1 Implicit Shape Model
3.1.1 Building the ISM codebook
3.1.2 ISM recognition
3.2 Summary
Chapter 4: The Proposed Scheme
4.1 Building the ISM codebook
4.1.1 Collecting features from training images
4.1.2 Clustering features
4.1.3 Training the codebook
4.2 Vehicle recognition
4.2.1 Creating the voting space
4.2.2 Hypothesis generation
4.2.3 Bounding box
Chapter 5: Experimental Results
Chapter 6: Conclusions and Future Work
6.1 Conclusions
6.2 Suggestions for future work
Bibliography
References
[1] National Transportation Safety Board, 2009 Annual Report to Congress, NTSB/SPC-10/01 (http://www.ntsb.gov/Publictn/2010/SPC1001.htm)
[2] Department of Statistics, Ministry of the Interior, ROC, Road Traffic Accidents, Police Administration, Statistical Yearbook of Interior (http://sowf.moi.gov.tw/stat/year/y06-11.xls)
[3] Peden, M. et al. 2004, World report on road traffic injury prevention: summary.
[4] Sun, Z., Bebis, G., Miller, R. 2006, On-Road Vehicle Detection: A Review, IEEE Transactions on pattern analysis and machine intelligence, vol.28, no.5, pp.694-711.
[5] Guo, D., Fraichard, T., Xie, M., Laugier, C. 2000, Color Modeling by Spherical Influence Field in Sensing Driving Environment, IEEE Intelligent Vehicle Symp., pp.249-254.
[6] S.D. Buluswar and B.A. Draper 1998, Color Machine Vision for Autonomous Vehicles, Int’l J. Eng. Applications of Artificial Intelligence, vol.1, no.2, pp.245-256.
[7] Jung, C., Schramm, R. 2004, Rectangle detection based on a windowed Hough transform, Proceedings of the 17th Brazilian Symposium on Computer Graphics and Image Processing.
[8] Kim, S., Kim, K., et al. 2005, Front and Rear Vehicle Detection and Tracking in the Day and Night Times using Vision and Sonar Sensor Fusion, Intelligent Robots and Systems, 2005 IEEE/RSJ International Conference, pp.2173-2178.
[9] Kalinke, T., Tzomakas, C., Seelen, W. 1998, A Texture-based Object Detection and an adaptive Model-based Classification, Proc. IEEE International Conf. Intelligent Vehicles, pp.143-148.
[10] Sun, Z., Bebis, G., Miller, R. 2006, Monocular Precrash Vehicle Detection: Features and Classifiers, IEEE Transactions on image processing, vol.15, no.7, pp.2019-2034.
[11] Yen-Hsiang Lin 2009, Visual Blind-spot Detection for Lane Change Assistance, Master Thesis, Institute of Computer Science and Information Engineering, National Central University.
[12] Bertozzi, M., A. Broggi, A. Fascioli, and S. Nichele 2000, Stereo vision-based vehicle detection, in Proc. IEEE Intelligent Vehicles Sym., Dearborn, MI, Oct.3-5, 2000, pp.39-44.
[13] Mori, H., Charkai, N. 1993, Shadow and Rhythm as Sign Patterns of Obstacle Detection, Proc. Int’l Symp. Industrial Electronics, pp.271-277.
[14] Hoffmann, C., Dang, T., Stiller, C. 2004, Vehicle detection fusing 2D visual features, IEEE Intelligent Vehicles Symposium.
[15] B. Leibe and B. Schiele 2003, Interleaved object categorization and segmentation, in BMVC, pp.759–768.
[16] B. Leibe, A. Leonardis, and B. Schiele 2004, Combined object categorization and segmentation with an implicit shape model. In ECCV’04 Workshop on Statistical Learning in Computer Vision.
[17] B. Leibe, A. Leonardis, and B. Schiele 2008, Robust object detection with interleaved categorization and segmentation, International Journal of Computer Vision, vol.77, no.1-3, pp.259–289.
[18] R. O. Duda and P. E. Hart 1972, Use of the Hough transformation to detect lines and curves in pictures, Commun. ACM, vol.15, no.1, pp.11–15
[19] A. Thomas et al. 2009, Shape-from-recognition: Recognition enables meta-data transfer, Computer Vision and Image Understanding, vol.113, issue 12, pp.1222-1234
[20] E. Seemann et al. 2005, An Evaluation of Local Shape-Based Features for Pedestrian Detection, Proceedings of the British Machine Vision Conference (BMVC), Oxford, UK, pp. 11-20
[21] D. Lowe 2004, Distinctive image features from scale-invariant keypoints. IJCV, vol.60, no.2, pp.91–110.
[22] D. Comaniciu, V. Ramesh, and P. Meer 2001, The Variable Bandwidth Mean Shift and Data-Driven Scale Selection, Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference, vol.1, pp.438-445.
[23] Ballard, D.H. 1987, Generalizing the hough transform to detect arbitrary shapes, Readings in computer vision, pp.714-725.
[24] Luke Fletcher, Lars Petersson and Alexander Zelinsky 2003, Driver Assistance Systems based on Vision In and Out of Vehicles, Proceedings of the IEEE Intelligent Vehicles Symposium (IV2003), Columbus, Ohio, USA, June 2003.
[25] S. Kagami, K. Okada, M. Inaba, and H. Inoue 1999, Real-time 3D flow generation system, in Proc. IEEE Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems, Taipei, Taiwan, 1999. IEEE Computer Press.
[26] Zhao, G., Yuta, S. 1993, Obstacle Detection by Vision System For An Autonomous Vehicle, Intelligent Vehicles, pp.31-36.
[27] Giachetti, A., Campani, M., Torre, V. 1998, The Use of Optical Flow for Road Navigation, IEEE Transactions on robotics and automation, vol.14, no.1, pp.34-48.
[28] Shi, J., Tomasi, C. 1994, Good Features to Track, Computer Vision and Pattern Recognition. Proceedings CVPR ’94., IEEE Computer Society Conference, pp.593-600.
Advisor: 蘇柏齊 (Po-Chyi Su)    Date of Approval: 2010-7-30
