Master's/Doctoral Thesis 105522055 Detailed Record




Author  Chun-Wei Hsieh (謝鈞惟)    Department  Computer Science and Information Engineering
Thesis Title  Weather-adapted vehicle detection and recognition using a squeezed faster region convolutional neural network
(縮小更快速區域卷積神經網路的適應性前車偵測與辨識)
Related Theses
★ Video error concealment for large damaged areas and scene changes ★ Force-feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis ★ A laparoscopic cholecystectomy surgical simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system ★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation ★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques ★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria ★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation based on dynamically loaded multiresolution terrain modeling ★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression ★ Appearance-preserving and view-dependent multiresolution modeling
Full-Text Availability
  1. The author has agreed to make the electronic full text available for immediate open access.
  2. The open-access electronic full text is licensed only for searching, reading, and printing by users for personal, non-profit academic research.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  According to statistics, most traffic accidents occur because a driver fails to pay attention and collides with another vehicle; advanced driver assistance systems (ADAS) have therefore become a popular research topic in recent years. In this thesis, we propose a weather-adaptive forward-collision warning system that helps the driver detect vehicles ahead. The accuracy of vehicle recognition, however, is often affected by many factors, chiefly weather and lighting conditions (e.g., daytime, night, dawn, driving toward the sun, slanting sunlight, dusk, rain, mist, dense fog, and overcast skies). We therefore use a convolutional neural network to train a front-vehicle detection system that adapts to various weather conditions and warns the driver in potentially dangerous situations to prevent accidents.

This thesis consists of three parts. The first part improves the Faster R-CNN (faster region convolutional neural network). The original Faster R-CNN uses VGG16 to extract features; because the VGG16 network is large and occupies considerable hardware resources, we replace it with the SqueezeNet architecture to reduce the network size and increase the speed. The second part replaces the original ROI pooling layer of Faster R-CNN with an ROI align layer to improve the detection results. The third part computes the distance to the preceding vehicle from the detected bounding box: using the focal length of the camera, the height at which the camera is mounted on the vehicle, and the position of the vehicle's bottom edge in the image, the distance is obtained by similar triangles.
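The thesis gives no code for this distance step; the following is a minimal Python sketch of the similar-triangles relation described above. The function name, parameter names, and the assumption that the horizon row is known from calibration are illustrative only and are not taken from the thesis.

    def estimate_distance(f_px, cam_height_m, y_bottom, y_horizon):
        # Distance to the preceding vehicle by similar triangles.
        #   f_px         : camera focal length in pixels (from calibration)
        #   cam_height_m : mounting height of the camera above the road, in meters
        #   y_bottom     : image row of the detected vehicle's bottom edge
        #   y_horizon    : image row of the horizon (principal-point row for a level camera)
        dy = y_bottom - y_horizon        # pixel offset of the contact point below the horizon
        if dy <= 0:                      # bottom edge at or above the horizon: distance undefined
            return float("inf")
        return f_px * cam_height_m / dy  # similar triangles: D / H = f / dy, so D = f * H / dy

For example, with a 700-pixel focal length, a camera mounted 1.2 m above the road, and a vehicle bottom edge 30 rows below the horizon, the estimate would be 700 × 1.2 / 30 = 28 m.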
In the experiments, we tested on dashcam videos covering various weather conditions. The object detection system reaches an mAP of 0.907, the average speed on 640×480 video is 30 frames per second, and the model has about 7.7 M parameters. With the original ROI pooling layer, the bounding-box accuracy of Faster R-CNN is 79.25%; with the ROI align layer that replaces it in our system, the bounding-box accuracy reaches 90.4%, an improvement of roughly 10 percentage points.
Abstract (English)  In recent years, machine learning has flourished in applications such as face recognition, speech recognition, and object detection. Object detection in particular is closely related to people's daily lives. Autonomous vehicles have gradually gained attention, and their safety is a very important issue; keeping an appropriate distance from the preceding vehicle to avoid a collision is a key requirement in practice. However, the accuracy of vehicle recognition is often affected by many factors, the most influential being weather conditions such as daytime, night, morning, glare from the sun, rain, mist, fog, and overcast skies. Therefore, we use a convolutional neural network to implement a front-vehicle detection system that adapts to various weather conditions and reminds the driver in time to avoid accidents.

This thesis consists of three parts. The first part improves Faster R-CNN: the original network uses VGG16 to extract features, and because VGG16 is relatively large and occupies more hardware resources, we adopt the SqueezeNet architecture in its place to reduce the network size and increase the speed. The second part replaces the ROI pooling layer with an ROI align layer to improve the detection results. The third part calculates the distance to the preceding vehicle: using the focal length of the camera, the height at which the camera is mounted on the car, and the position of the detected vehicle's bottom edge in the image, the distance is obtained by similar triangles.
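The thesis does not specify an implementation of the SqueezeNet backbone; as an illustration of the squeeze/expand idea that makes it smaller than VGG16, below is a minimal PyTorch-style sketch of a Fire module. The class name, arguments, and the channel counts in the usage comment follow the SqueezeNet paper and are assumptions, not the thesis code.

    import torch
    import torch.nn as nn

    class Fire(nn.Module):
        # SqueezeNet Fire module: a 1x1 "squeeze" convolution followed by parallel
        # 1x1 and 3x3 "expand" convolutions whose outputs are concatenated.
        def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
            super().__init__()
            self.squeeze   = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
            self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
            self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
            self.relu      = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.relu(self.squeeze(x))                  # shrink channels before the 3x3 path
            return torch.cat([self.relu(self.expand1x1(x)),
                              self.relu(self.expand3x3(x))], dim=1)

    # e.g., Fire(96, 16, 64, 64) maps a 96-channel feature map to 64 + 64 = 128 channels
    # while keeping most weights in 1x1 convolutions, which is what shrinks the backbone.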
In the experiments, we use 697 images for training and 233 images for testing; the videos cover various weather conditions. The mAP of the object detection system reaches 0.907, the average test speed on 640×480 video is 30 frames per second, and the model has about 7.7 M parameters.
Keywords (Chinese)  ★ squeezed convolution (縮小卷積)    Keywords (English)  ★ squeeze cnn
Table of Contents
Abstract (in Chinese) ii
Abstract (in English) iii
Acknowledgments iv
Table of Contents v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 System architecture 2
1.3 Features of the thesis 4
1.4 Organization of the thesis 4
Chapter 2 Related Work 6
2.1 Vehicle detection 6
2.2 Development of CNN-based object detection systems 10
2.3 Miniaturization of convolutional neural networks 13
Chapter 3 Squeezed Faster R-CNN 19
3.1 Faster R-CNN architecture 19
3.2 Squeezing the Faster R-CNN 24
3.3 SqueezeNet architecture 25
3.4 ROI align layer 30
Chapter 4 Distance to the Preceding Vehicle and Time to Collision 34
4.1 Computing the distance to the preceding vehicle 34
4.2 Time to collision (TTC) 37
Chapter 5 Experiments and Results 38
5.1 Experimental setup 38
5.2 Training the squeezed Faster R-CNN 38
5.3 Vehicle detection experiments 42
5.4 Detection results of the squeezed Faster R-CNN 49
Chapter 6 Conclusions and Future Work 54
References 56
References  [1] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.39, is.6, pp.1137-1149, 2016.
[2] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size," arXiv preprint arXiv:1602.07360, 2016.
[3] J. Armingol, A. Escalera, C. Hilario, J. M. Collado, J. P. Carrasco, M. J. Flores, J. M. Pastor, and F. J. Rodriguez, "IVVI: Intelligent vehicle based on visual information," Robotics and Autonomous Systems, vol.55, no.12, pp.904-916, 2007.
[4] J. Chu, L. Ji, L. Guo, Libibing, and R. Wang, "Study on method of detecting preceding vehicle based on monocular camera," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, Jun.14-17, 2004, pp.750-755.
[5] S.K. George, N.H.C. Yung, and G.K.H. Pang, "Vehicle shape approximation from motion for visual traffic surveillance," in Proc. IEEE Conf. Intelligent Transportation Systems, Oakland, CA, 2001, pp.610-615.
[6] B. Shyr, Daytime Detection of Leading and Neighboring Vehicles on Highway: A Major Capability for the Driver Assistant Vision System, Master Thesis, Electrical Engineering, National Chung Cheng Univ., Chia-yi, Taiwan, 2003.
[7] M. Ansari, S. Mousset, and A. Bensrhair, "Temporal consistent real-time stereo for intelligent vehicles," Pattern Recognition Letters, vol.31, no.11, pp.1226-1238, 2010.
[8] V. Milanes, D. F. Llorca, J. Villagra, J. Perez, C. Fernandez, I. Parra, C. Gonzalez, and M. A. Sotelo, "Intelligent automatic overtaking system using vision for vehicle detection," Expert Systems with Applications, vol.39, no.3, pp.3362-3373, 2012.
[9] S. Sivaraman, and M. M. Trivedi, "Combining monocular and stereo-vision for real-time vehicle ranging and tracking on multilane highways," in Proc. IEEE Conf. on Intelligent Transportation Systems, Washington DC, Oct.5-7, 2011, pp.1249-1254.
[10] T. Surgailis, A. Valinevicius, V. Markevicius, D. Navikas, and D. Andriukaitis, "Avoiding forward car collision using stereo vision system," Elektronika ir Elektrotechnika, vol.18, no.8, pp.37-40, 2012.
[11] J. Cui, F. Liu, Z. Li, and Z. Jia, "Vehicle localisation using a single camera," in Proc. IEEE Symp. Intelligent Vehicles, San Diego, CA, Jun. 21-24, 2010, pp.871-876.
[12] A. Broggi, P. Cerri, and P. C. Antonello, "Multi-resolution vehicle detection using artificial vision," in Proc. Conf. Intelligent Vehicles Symp., Parma, Italy, Jun.14-17, 2004, pp.310-314.
[13] A. Jazayeri, C. Hongyuan, Z. Jiang Yu, and M. Tuceryan, "Vehicle detection and tracking in car video based on motion model," IEEE Trans. Intelligent Transportation Systems, vol.12, no.2, pp.583-595, 2011.
[14] S. Teoh, and T. Braunl, "Symmetry-based monocular vehicle detection system," Machine Vision and Applications, vol.23, no.5, pp.831-842, 2012.
[15] G. Toulminet, S. Mousset, and A. Bensrhair, "Fast and accurate stereo vision-based estimation of 3D position and axial motion of road obstacles," Int. Journal of Image and Graphics, vol.4, no.1, pp.99-126, 2004.
[16] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Columbus, Ohio, Jun.23-28, 2014, pp.580-587.
[17] J. Uijlings, K. Sande, T. Gevers, and A. Smeulders, “Selective search for object recognition,” Int. Journal of Computer Vision (IJCV), vol.104, is.2, pp.154-171, 2013.
[18] L. Andreone, F. Bellotti, A. D. Gloria, and R. Lauletta, "SVM-based pedestrian recognition on near-infrared images," in Proc. 4th IEEE Int. Symp. on Image and Signal Processing and Analysis, Torino, Italy, Sep.15-17, 2005, pp.274-278.
[19] R. Girshick, "Fast R-CNN," in Proc. of IEEE Int. Conf. on Computer Vision (ICCV), Santiago, Chile, Dec.11-18, 2015, pp.1440-1448.
[20] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.37, is.9, pp.1904-1916, 2015.
[21] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: unified, real-time object detection," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp.779-788.
[22] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conf. on Computer Vision (ECCV), Amsterdam, Netherlands, Oct.8-16, 2016, pp.21-37.
[23] S. Han, H. Mao, and W. J. Dally, "Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding," in Proc. Int. Conf. Learn. Represent (ICLR), San Juan, Puerto Rico, May.2-4, 2016, pp.1-14.
[24] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in European Conf. on Computer Vision (ECCV), Amsterdam, Netherlands, Oct.11-14, 2016, pp.525-542.
[25] M. Lin, Q. Chen, and S. Yan, "Network in network," in Proc. Int. Conf. Learn. Represent (ICLR), Banff, Canada, Apr.14-16, 2014, pp.274-278.
[26] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[27] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: an extremely efficient convolutional neural network for mobile devices," arXiv preprint arXiv:1707.01083, 2017.
[28] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, MA, Jun.7-12, 2015, pp.1-9.
[29] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," in Proc. of Conf. on Neural Information Processing Systems (NIPS), Montreal, Canada, Dec.12, 2014, pp.1-9.
[30] F. Chollet, "Xception: deep learning with depthwise separable convolutions," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul.22-25, 2017, pp.1800-1807.
[31] K. He, G. Gkioxari, P. Dollar, and R. Girshick, "Mask R-CNN," in Proc. of IEEE Int. Conf. on Computer Vision (ICCV), Venice, Italy, Oct.22-29, 2017, pp.2980-2988.
[32] D.-C. Tseng, Monocular Computer Vision Aided Road Vehicle Driving for Safety, U.S. Patent No. 6765480, 2004.
[33] T. Zielke, M. Brauckmann, and W. Von Seelen, "Intensity and edge-based symmetry detection with an application to car-following," CVGIP: Image Understanding, vol.56, no.2, pp.177-190, 1993.
Advisor  Din-Chang Tseng (曾定章)    Date of Approval  2018-07-31