Graduate Thesis 104522019 Detailed Record




Author  Yu-Jie Huang (黃宇杰)    Department  Computer Science and Information Engineering
Thesis Title  Car Door Safety Warning System Based on Dense Optical Flow Analysis
(original Chinese title: 基於稠密光流分析之車門安全警示系統)
Related Theses
★ Face Replacement System for Designated Persons in Video
★ Single-Finger Virtual Keyboard Using a Single Camera
★ Vision-Based Recognition System for Handwritten Zhuyin Symbol Trajectories
★ Vehicle Detection in Aerial Images Using Dynamic Bayesian Networks
★ Video-Based Handwritten Signature Verification
★ Moving Skin-Color Region Detection Using Gaussian Mixture Models of Skin Color and Shadow Probability
★ Crowd Segmentation with Confidence Levels in Images
★ Region Segmentation and Classification of Aerial Surveillance Images
★ Comparative Analysis of Different Features and Regression Methods for Crowd Counting Applications
★ Vision-Based Robust Multi-Fingertip Detection and Human-Computer Interface Applications
★ Traffic Flow Estimation from Nighttime Videos Captured with Raindrop-Contaminated Lenses
★ Image Feature Point Matching Applied to Landmark Image Retrieval
★ Automatic Region-of-Interest Segmentation and Trajectory Analysis in Long-Distance Traffic Images
★ Short-Term Solar Irradiance Forecasting Based on Regression Models Using All-Sky Image Features and Historical Information
★ Analysis of the Performance of Different Classifiers for Cloud Detection Application
★ Cloud Tracking and Solar Occlusion Prediction from All-Sky Images
  1. This electronic thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese)  With the growing popularity of cars and motorcycles in recent years, the number of vehicles on the road has risen year by year, and traffic accidents have risen with it. Advanced Driver Assistance Systems (ADAS) are therefore widely deployed in vehicles in the hope of reducing these accidents.
This thesis focuses on the part of driver assistance systems that prevents collisions by issuing warnings from video of the area behind the car door. Most academic research relies on infrared sensors to detect vehicles approaching from behind and decides whether to warn based on the distance between the approaching vehicle and the car door. The drawback of this approach is that the sensor may raise an alarm even when the vehicle behind is moving away from the door, producing false warnings.
This thesis proposes a more flexible system architecture that automatically detects the position of the car door, defines a region of interest, and detects vehicles approaching from behind within that region. Once a phone or camera has been mounted, the system can automatically detect dangerous vehicles approaching the rear of the car door. A rough sketch of the body/ROI detection idea follows below.
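The abstract does not spell out how the door position is found, but the table of contents points to a Gaussian color model of the own car body (Section 2.1). The sketch below, assuming OpenCV and NumPy, illustrates that general idea: fit a single Gaussian to pixels sampled from the visible car body and threshold the Mahalanobis distance to separate body from background. The file name, sampling region, and threshold are illustrative assumptions, not the thesis' settings.

```python
# Minimal sketch (not the thesis code): single-Gaussian color model of the car body.
import cv2
import numpy as np

frame = cv2.imread("door_view.jpg")  # hypothetical frame from the rear-door camera

# Assumption: the bottom rows of the frame show the own car body; sample them.
samples = frame[-50:, :, :].reshape(-1, 3).astype(np.float64)
mean = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)  # regularize before inversion
inv_cov = np.linalg.inv(cov)

# Mahalanobis distance of every pixel to the body-color model.
diff = frame.reshape(-1, 3).astype(np.float64) - mean
dist = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
body_mask = (dist < 3.0).reshape(frame.shape[:2])  # illustrative threshold

# Pixels outside the body mask (the scene behind the door) can then form the ROI.
roi_mask = ~body_mask
```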
The proposed system does not rely on distance sensors. Instead, it analyzes the dense optical flow of camera or phone video as the feature describing events behind the car door. We cluster the trajectories of moving objects, extract a feature vector for each trajectory cluster, and use AdaBoost and Support Vector Machine (SVM) classifiers to decide whether a dangerous event is occurring. A sketch of the dense optical flow step is shown below.
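As a rough illustration of the dense optical flow step, the sketch below uses OpenCV's Farneback implementation (one of the standard dense-flow methods) on a hypothetical rear-door video. The parameters and the simple magnitude threshold are assumptions for illustration only; the thesis' actual tracking and ROI handling are not reproduced here.

```python
# Minimal sketch (not the thesis code): dense optical flow on a rear-door video.
import cv2

cap = cv2.VideoCapture("rear_door.mp4")  # hypothetical clip from the mounted phone
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # One (dx, dy) flow vector per pixel between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = mag > 2.0  # illustrative threshold for "moving" pixels
    print("moving pixels in frame:", int(moving.sum()))
    prev_gray = gray

cap.release()
```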
The experiments show that the proposed system detects dangerous events behind the car door reliably and identifies the dangerous events in the test videos accurately, so a single phone or camera is sufficient to detect vehicles dangerously approaching the rear of the car door. The system runs in real time at 29 frames per second on a personal computer.
Abstract (English)
With the popularization of cars and motorcycles in recent years, the number of vehicles on the road has increased year by year, and traffic accidents have increased along with it. Advanced Driver Assistance Systems (ADAS) are therefore widely used in cars, in the hope of reducing accidents.

This thesis focuses on a driver assistance system that analyzes video of the area behind the car door and detects vehicles approaching from the rear. Most academic research uses infrared sensors to detect trailing vehicles and decides whether to alarm according to the distance between the trailing vehicle and the car door. The drawback of this method is that it can misjudge the situation: the sensor may alarm even when the vehicle behind is moving away from the car door.

This thesis proposes a more flexible system architecture. It automatically detects the position of the car door and defines a region of interest in which approaching vehicles are detected. Once a cellphone or camera is mounted, the system can automatically detect dangerous vehicles approaching from behind the car door.

The proposed system does not depend on distance sensors; instead, it analyzes the dense optical flow of camera or cellphone video as the feature describing events behind the car door. We cluster the trajectories of moving objects, extract a feature vector for each cluster, and use AdaBoost and Support Vector Machine (SVM) classifiers to determine whether a dangerous event is occurring, as sketched below.
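As a hedged sketch of the classification stage, the snippet below trains an SVM and an AdaBoost classifier on per-cluster feature vectors with scikit-learn and averages their probability outputs. The feature definition, the random placeholder data, and the fusion rule are illustrative assumptions; the thesis describes its own feature set and a sequential state transition for combining the two classifiers (Section 3.3.3).

```python
# Minimal sketch (not the thesis code): SVM + AdaBoost on per-cluster feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

# Placeholder data: one row per trajectory cluster (e.g. mean flow magnitude,
# dominant direction, cluster size, centroid); label 1 = dangerous approach.
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

svm = SVC(kernel="rbf", probability=True).fit(X, y)
ada = AdaBoostClassifier(n_estimators=50).fit(X, y)

def is_dangerous(feature_vec, threshold=0.5):
    """Average the two classifiers' danger probabilities (illustrative fusion rule)."""
    p_svm = svm.predict_proba([feature_vec])[0, 1]
    p_ada = ada.predict_proba([feature_vec])[0, 1]
    return (p_svm + p_ada) / 2.0 > threshold

print(is_dangerous(X[0]))
```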

The experiments show that the proposed system detects dangerous events behind the car door with high reliability, and the dangerous events in the test videos are detected accurately, achieving the goal of detecting dangerous approaching vehicles behind the car door with a single cellphone or camera. In practice, the system runs at 29 frames per second on a personal computer.
Keywords (Chinese) ★ Car door safety
★ Dense optical flow
★ Advanced driver assistance systems
Keywords (English)
Table of Contents
Abstract (Chinese) ........... V
Abstract (English) ........... VI
Acknowledgements ........... VIII
List of Figures ........... XI
List of Tables ........... XII
Chapter 1  Introduction ........... 1
1.1 Motivation ........... 1
1.2 Related Work ........... 3
1.3 System Flow ........... 5
1.4 Thesis Organization ........... 8
Chapter 2  Automatic Detection of the Own Car Body and the Region of Interest ........... 9
2.1 Gaussian Color Model of the Own Car Body ........... 9
2.2 Region of Interest ........... 13
2.3 Tracking Moving Objects with Dense Optical Flow ........... 15
Chapter 3  Trajectory Clustering and Cluster Feature Extraction ........... 19
3.1 Clustering ........... 19
3.1.1 Distance Metric ........... 19
3.1.2 Similarity Metric ........... 21
3.1.3 Clustering Method ........... 21
3.2 Cluster Feature Extraction ........... 22
3.3 Cluster Feature Training ........... 25
3.3.1 SVM Training ........... 25
3.3.2 AdaBoost Training ........... 28
3.3.3 Sequential State Transition and Combination of SVM and AdaBoost Predictions ........... 30
Chapter 4  Experimental Results and Discussion ........... 32
4.1 Sample Labeling and Experimental Setup ........... 32
4.1.1 Experimental Equipment ........... 32
4.1.2 Sample Labeling ........... 33
4.2 Region-of-Interest Detection Results ........... 38
4.3 Dangerous Event Detection ........... 41
4.3.1 Evaluation Metrics ........... 41
4.3.2 Experimental Comparison of the AdaBoost and SVM Classifiers ........... 41
4.3.3 Experimental Results of Combining the AdaBoost and SVM Classifiers ........... 43
4.3.4 Comparison of Sample Labeling With and Without Speed ........... 52
4.3.5 Manually Defined Danger Zone (Without Classifiers) ........... 53
Chapter 5  Conclusions and Future Work ........... 56
References ........... 57
Advisor  Hsu-Yung Cheng (鄭旭詠)    Date of Approval  2017-07-20