Thesis 105522113: Detailed Record




Name: Dao-Wei Yang (楊道偉)
Department: Computer Science and Information Engineering
Thesis Title: An Adaptive Vehicle Detection Scheme for Urban Traffic Scenes based on Convolutional Neural Networks
(Chinese title: 基於卷積神經網路之市區道路場景自適應車輛偵測機制)
Related Theses
★ Implementation of a Cross-Platform Wireless Heart Rate Analysis System Based on Qt
★ An Extra-Message Transmission Mechanism for VoIP
★ Detection of Transition Effects Associated with Sports Highlights
★ Video/Image Content Authentication Based on Vector Quantization
★ A Baseball Highlight Extraction System Based on Transition Effect Detection and Content Analysis
★ Image/Video Content Authentication Based on Visual Feature Extraction
★ Detecting and Tracking Foreground Objects in Moving Surveillance Footage Using Dynamic Background Compensation
★ Adaptive Digital Watermarking for H.264/AVC Video Content Authentication
★ A Baseball Highlight Extraction and Classification System
★ A Real-Time Multi-Camera Tracking System Using H.264/AVC Features
★ A Preceding Vehicle Detection Scheme for Highways Using Implicit Shape Models
★ A Video Copy Detection Scheme Based on Temporal and Spatial Feature Extraction
★ In-Vehicle Video Coding Combining Digital Watermarking and Region-of-Interest Bit-Rate Control
★ H.264/AVC Video Encryption/Decryption and Digital Watermarking for Digital Rights Management
★ A News Video Analysis System Based on Text and Anchorperson Detection
★ H.264/AVC Video Content Authentication Based on Digital Watermarking
  1. The author has agreed to make this electronic thesis available immediately.
  2. The open-access full text is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) In recent years, a large number of cameras have been installed at urban intersections to help monitor traffic conditions; making good use of this footage would aid the construction of advanced Intelligent Transportation Systems. This research collects urban road surveillance videos from open government data sources and proposes a scene-adaptive detection scheme for moving vehicles. Intersection cameras typically capture scenes from different angles, the scenes may contain varied backgrounds such as buildings, roadside objects, signboards, and street trees, and pedestrians and vehicles may occlude one another on the road, so a single offline detection model leaves room for improvement. The proposed method consists of two phases. In the first phase, a small number of urban road images are collected to train a general vehicle detection model with Faster R-CNN, which detects and classifies vehicles in the target scene. In the second phase, background construction generates vehicle masks, which are matched against the first-phase detection results to collect a sufficient number of single vehicles of each class; these vehicles are then pasted into the target scene in temporal order to produce a large amount of annotated data for that scene almost automatically. The annotated data are used to train a second, scene-adaptive Faster R-CNN model, which performs vehicle detection and can support subsequent traffic flow estimation. Experimental results show that the proposed method effectively detects and classifies vehicles in urban scenes and also performs well on occluded vehicles.
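The matching step sketched in the abstract above (comparing background-subtraction proposals against the general model's detections to harvest labeled single vehicles) can be illustrated with a minimal IoU check. This is an assumption-laden sketch, not the thesis's implementation: the function names, box format, and the 0.5 threshold are all illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def confirmed_vehicles(proposals, detections, thresh=0.5):
    """Keep mask-based proposals that overlap a detector box strongly
    enough, inheriting the detector's class label."""
    kept = []
    for p in proposals:
        for box, label in detections:
            if iou(p, box) >= thresh:
                kept.append((p, label))
                break
    return kept

proposals = [(10, 10, 50, 50), (100, 100, 120, 120)]   # from the vehicle masks
detections = [((12, 12, 48, 52), "car")]               # from the general model
print(confirmed_vehicles(proposals, detections))
# → [((10, 10, 50, 50), 'car')]
```

Only proposals confirmed by the detector survive, which keeps mislabeled background blobs out of the automatically generated training set.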
Abstract (English) A large number of digital cameras have been installed at intersections in urban areas to help monitor traffic conditions. Making better use of the scenes captured by these traffic surveillance cameras can facilitate the construction of advanced Intelligent Transportation Systems. This research develops an adaptive vehicle detection scheme for urban traffic scenes, collecting roadside surveillance videos from publicly available sources. The proposed scheme consists of two main phases. The first phase collects a small number of traffic surveillance images to train a general model with Faster R-CNN. The second phase uses background subtraction to extract vehicle proposals; a sufficient number of vehicles are collected by comparing these proposals with the general model's results. The collected vehicles are superimposed on the constructed background in an appropriate order, achieving semi-automatic generation of annotated training data, which is then used to train a second-phase adaptive model. Experimental results show that the proposed scheme performs well and can handle the vehicle occlusion problem.
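As a rough illustration of the second phase (background construction, mask extraction, and pasting collected vehicles back into the scene to synthesize annotated frames), the following NumPy sketch uses a temporal median as the background model and simple differencing for the mask. The thesis works on real footage with a Gaussian-mixture background model, so every name, threshold, and the toy synthetic data here are assumptions for illustration only.

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median approximates the static background."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=30):
    """Pixels differing strongly from the background are foreground."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def bounding_box(mask):
    """Smallest axis-aligned box enclosing all foreground pixels."""
    ys, xs = np.nonzero(mask)
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1

# Synthetic example: a bright 10x10 "vehicle" sliding across a gray road.
rng = np.random.default_rng(0)
road = rng.integers(90, 110, size=(64, 64))
frames = []
for x in (5, 20, 35, 50):
    f = road.copy()
    f[27:37, x:x + 10] = 255          # the moving object at position x
    frames.append(f)

bg = estimate_background(frames)
mask = foreground_mask(frames[1], bg)
y0, x0, y1, x1 = bounding_box(mask)   # recovers the object at x = 20
patch = frames[1][y0:y1, x0:x1]

# Superimpose the extracted patch elsewhere on the background to
# synthesize a new training frame whose annotation (the pasted box)
# is known by construction.
synth = bg.copy()
synth[10:10 + patch.shape[0], 2:2 + patch.shape[1]] = patch
```

Because each pasted vehicle's position and class are known, the synthesized frames come with labels for free, which is the key to generating scene-specific training data almost automatically.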
Keywords (Chinese) ★ Urban road images
★ Adaptive model
★ Vehicle detection
★ Vehicle recognition
★ Faster R-CNN
★ Background construction
Keywords (English) ★ Urban Scenes
★ Adaptation Model
★ Vehicle Detection
★ Vehicle Recognition
★ Faster R-CNN
★ Background Subtraction
Table of Contents
Chinese Abstract i
Abstract ii
Acknowledgments iii
Table of Contents iv
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Contributions 3
1.3 Thesis Organization 4
Chapter 2 Related Work 5
2.1 Traditional Vehicle Detection 5
2.1.1 Foreground/Background Segmentation 5
2.1.2 Vehicle Verification 7
2.2 Object Detection 8
2.2.1 R-CNN 10
2.2.2 Fast R-CNN 11
2.2.3 Faster R-CNN 12
Chapter 3 Vehicle Detection Scheme 14
3.1 System Overview 14
3.2 Training the General Model 15
3.2.1 Urban Road Image Collection 16
3.2.2 Annotation Data 17
3.2.3 Faster R-CNN Training 18
3.3 Object Extraction 19
3.3.1 Background Construction 20
3.3.2 Mask Analysis 22
3.4 Collecting Single Vehicles 23
3.5 Generating Training Data 26
3.5.1 Occluded Vehicles 28
3.5.2 Ordering of Multiple Vehicles 31
3.6 Training the Adaptive Model 34
Chapter 4 Experimental Results 35
4.1 Development Environment and the Manually Built Dataset 35
4.2 Training Sample Collection 37
4.3 Comparison of the General and Adaptive Models 42
4.4 mAP Evaluation 44
4.4.1 Terminology 44
4.4.2 mAP 45
4.4.3 Evaluation Results 47
4.4.4 Additional Comparisons 50
Chapter 5 Conclusions and Future Work 53
5.1 Conclusions 53
5.2 Future Work 54
References 55
Advisor: Po-Chyi Su (蘇柏齊)    Date of Approval: 2018-08-17
