Master's/Doctoral Thesis 105553005: Detailed Record




Author Chiu-Yen Lai (賴秋燕)   Department Executive Master Program, Department of Communication Engineering
Thesis Title Rear-Vehicle Detection and Recognition Based on an Embedded Deep-Learning System
(Chinese title: 偵測與辨識後方來車的嵌入式深度學習系統)
  1. The author has agreed to make the electronic full text of this thesis available immediately.
  2. The released electronic full text is licensed to users for academic research only, limited to personal, non-profit searching, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) In recent years, governments around the world have enacted vehicle-safety regulations to reduce the incidence of traffic accidents; in automobiles, Advanced Driver Assistance Systems (ADAS) assist the driver and improve driving safety. For motorcycles, the Taiwan government mandated in 1997 (ROC year 86) that riders and pillion passengers must wear safety helmets or face a fine; in addition, motorcycles manufactured after 2019 (ROC year 108) must be equipped with safety devices, either an anti-lock braking system (ABS) or a combined braking system (CBS), to improve riding safety. With growing awareness of road safety, many motorcycles are additionally fitted with dashboard cameras, but these are only passive recording devices and cannot actively provide real-time safety detection and warnings.
Many traffic accidents stem from human factors such as drowsy driving, unfamiliar road conditions, speeding, drunk driving, rear-end collisions, and large-vehicle blind spots; this study therefore focuses on alerting motorcyclists to vehicles approaching from behind. Because a mobile platform must be lightweight, the detection and recognition system in this study uses embedded hardware, a Raspberry Pi with a Neural Compute Stick 2 (NCS 2), together with the lightweight deep-learning model YOLOv3-Tiny. A camera detects trucks, buses, cars, and motorcycles behind the rider and issues a warning before they come close, giving the rider an additional active safety device and achieving real-time detection and recognition of rear vehicles.
In this thesis, without increasing the depth of the network, we add Res block modules at the 5th and 6th layers for feature extraction. Tested on 960×540-resolution video, the execution speed drops slightly compared with the original YOLOv3-Tiny, from 124 fps to 104 fps, while the mAP improves from 93.63% to 96.71%.
Abstract (English) In recent years, to reduce the incidence of traffic accidents, governments around the world have introduced vehicle-safety regulations, and Advanced Driver Assistance Systems (ADAS) are used in automobiles to assist drivers and improve driving safety. In 1997, the Taiwan government set up a series of safety rules for motorcyclists, passengers, and manufacturers: riders and pillion passengers must wear safety helmets or face a fine, and motorcycles manufactured in or after 2019 must be equipped with safety devices such as an anti-lock braking system (ABS) or a combined braking system (CBS). For the sake of safety, many motorcycles are additionally fitted with a driving recorder; this, however, is a passive recording system and cannot provide active, real-time detection and warning alerts.
Since many traffic accidents are caused by human factors such as drowsy driving, unfamiliar road conditions, speeding, drunk driving, rear-end collisions, and large-vehicle blind spots, this research aims to alert motorcyclists to vehicles approaching from behind and around them. To keep the apparatus lightweight, the detection and recognition system uses embedded devices, a Raspberry Pi with a Neural Compute Stick 2 (NCS 2), together with the lightweight deep-learning model YOLOv3-Tiny. A rear-facing camera detects trucks, buses, cars, and motorcycles behind the rider, and a warning is issued to the motorcyclist as a vehicle approaches, providing active, real-time detection and a safer riding environment.
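As a rough illustration of how such a pipeline might be wired together, the following is a minimal sketch of YOLOv3-Tiny inference on a Raspberry Pi with the Neural Compute Stick 2 through the OpenVINO Inference Engine (the 2020.x Python API); it is not the thesis implementation, and the IR file names, camera index, and omitted box-decoding step are assumptions for illustration.

```python
# Minimal sketch, assuming YOLOv3-Tiny has already been converted to OpenVINO IR.
# "yolov3-tiny.xml"/"yolov3-tiny.bin" are hypothetical file names.
import cv2
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO 2020.x Python API

ie = IECore()
net = ie.read_network(model="yolov3-tiny.xml", weights="yolov3-tiny.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # run on the NCS 2

input_blob = next(iter(net.input_info))
_, _, in_h, in_w = net.input_info[input_blob].input_data.shape

cap = cv2.VideoCapture(0)  # rear-facing camera; index 0 is an assumption
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the network input size and reorder HWC -> NCHW.
    blob = cv2.resize(frame, (in_w, in_h)).transpose(2, 0, 1)[np.newaxis, ...]
    outputs = exec_net.infer(inputs={input_blob: blob})
    # "outputs" holds the raw YOLO feature maps; decoding boxes for the
    # truck/bus/car/motorcycle classes and raising the rider alert is omitted here.
cap.release()
```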
Without increasing the depth of the network, we add Res block modules at the 5th and 6th layers to perform feature extraction, and test and compare the models on 960×540-resolution video. Compared with the original YOLOv3-Tiny, the execution speed decreases slightly from 124 fps to 104 fps, while the mAP increases from 93.63% to 96.71%.
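The thesis itself works with the Darknet configuration of YOLOv3-Tiny; purely as an illustrative sketch of the kind of block described above, the following PyTorch module implements an identity-skip residual block that keeps the spatial size and channel count unchanged, so it can be inserted after an existing convolutional layer without deepening the rest of the network. The channel count and feature-map size in the example are assumptions, not values from the thesis.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Identity-skip residual block: output = x + F(x).

    Because the spatial size and channel count are preserved, the block can
    be spliced in after an existing layer without changing the downsampling
    path of the backbone.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)  # skip connection

# Example usage with an assumed 256-channel feature map.
block = ResBlock(256)
features = torch.randn(1, 256, 26, 26)      # dummy feature map for a shape check
print(block(features).shape)                 # torch.Size([1, 256, 26, 26])
```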
Keywords (Chinese) ★ Advanced Driver Assistance Systems (先進駕駛輔助系統)
★ Raspberry Pi (樹莓派)
★ Neural Compute Stick (神經運算棒)
Keywords (English) ★ Advanced Driver Assistance Systems
★ Raspberry Pi
★ Neural Compute Stick 2
Table of Contents Abstract (Chinese) i
Abstract (English) iv
Acknowledgments vi
Table of Contents vii
List of Figures ix
List of Tables xii
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 System Architecture 4
1.3 Thesis Contributions 5
1.4 Thesis Organization 6
Chapter 2 Related Work 7
2.1 Introduction to Convolutional Neural Networks 7
2.2 Development of CNN-based Object Detection Systems 11
2.3 Lightweight CNN Object Detection 15
Chapter 3 Object Detection and Recognition 21
3.1 Two-stage Detection Networks 21
3.2 One-stage Detection Networks 22
3.3 Development of the YOLO Series 23
Chapter 4 Experiments and Results 39
4.1 Experimental Equipment 39
4.2 CNN Training Architecture 42
4.3 Experiments and Evaluation on Moving Vehicles 45
4.4 Introduction to OpenVINO 54
4.5 Deployment on the Embedded System and Results 60
Chapter 5 Conclusions and Future Work 64
References 65
Advisors Yen-Wen Chen and Din-Chang Tseng (陳彥文、曾定章)  Date of Approval 2020-07-29
