Graduate Thesis 104521051: Detailed Record




Name: Kuan-Ying Huang (黃冠穎)    Department: Department of Electrical Engineering
Thesis Title: 基於深度學習之贓車偵測系統 (Vehicle detection system based on deep learning)
Related Theses
★ Control of a hybrid power supply system for direct methanol fuel cells
★ Water quality monitoring for hydroponic plants using refractive index measurement
★ A DSP-based automatic guidance and control system for a model car
★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-ramp and off-ramp signals
★ A study on the fuzziness of fuzzy sets
★ Further improvement of motion control performance for a dual-mass spring-coupled system
★ A machine vision system for air hockey
★ Robotic offense and defense control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ A real-time recognition system for access control and monitoring
★ Air hockey: human versus robotic arm
★ A mahjong tile recognition system
★ Applying correlation-error neural networks to radiometric measurement of vegetation and soil moisture
★ Stand-up control of a three-link robot
  1. The electronic full text of this thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  This thesis proposes a stolen vehicle detection system based on deep learning. Deep learning techniques are used to recognize a vehicle's model, color, and license plate number, and the results are then compared against the vehicle registration database to verify the vehicle's identity. The system employs two real-time object detection networks, YOLOv2 and Tiny YOLO. The former, called the vehicle type detection network, identifies the car model and color from the front view of the vehicle and remains accurate even when the vehicle is seen at an angle; in this work it is trained on 100 car models and 11 colors that are common in Taiwan. The latter, called the character detection network, recognizes characters directly on license plate images. Unlike traditional license plate recognition methods, which require plate localization, rectification, and character segmentation, this approach is simpler and more robust: it can read Taiwanese plates that are skewed, blurred, or poorly lit, and it also handles plates of other formats, such as U.S. plates with complex background patterns. In addition, to accommodate new car models in the future, this thesis designs a training workflow for the vehicle type detection network so that users can easily extend the set of classes the network learns.
Two usage modes are provided: a mobile app and real-time video analysis. In the mobile app mode, the user photographs a vehicle outdoors with a smartphone camera; the image is sent to a server for processing, the analysis result is returned, and the user immediately learns whether the vehicle is suspicious. The real-time video analysis mode is intended for street monitoring: it tallies the vehicles that appear in a surveillance feed and checks each one against the database. Overall, this thesis delivers a complete and robust stolen vehicle detection system. Experimental results show that it can simultaneously detect the model, color, and plate number of multiple vehicles in surveillance footage, and that it remains usable under different viewing angles and lighting conditions.
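As a concrete illustration of the registration check described in the abstract, the following minimal Python sketch compares a detected (plate, model, color) triple against a registration record. The detector outputs, the example plate numbers, and the in-memory REGISTERED_VEHICLES table are hypothetical stand-ins for illustration only; the thesis's actual database schema and code are not shown here.

from dataclasses import dataclass

@dataclass
class VehicleObservation:
    plate: str   # plate number assembled from the character detection network
    model: str   # car model predicted by the vehicle type detection network
    color: str   # car color predicted by the vehicle type detection network

# Hypothetical stand-in for the registered-vehicle database (keyed by plate number).
REGISTERED_VEHICLES = {
    "ABC-1234": {"model": "Toyota Altis", "color": "white"},
    "XYZ-5678": {"model": "Honda CR-V", "color": "black"},
}

def check_vehicle(obs: VehicleObservation) -> str:
    """Compare one detected vehicle against the registration record for its plate."""
    record = REGISTERED_VEHICLES.get(obs.plate)
    if record is None:
        return "suspect: plate not found in the database"
    if record["model"] != obs.model or record["color"] != obs.color:
        return "suspect: plate does not match the registered model/color"
    return "consistent with registration"

# Example: a white Toyota Altis carrying a plate registered to a black Honda CR-V.
print(check_vehicle(VehicleObservation("XYZ-5678", "Toyota Altis", "white")))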
Abstract (English)  This thesis presents a vehicle detection system based on deep learning. We use two deep-learning detectors: a vehicle type detector and a plate number detector. The former is customized for model and color classification, and the latter performs license plate recognition (LPR). The vehicle type detector predicts 100 models and 11 colors found in Taiwan, and it takes a whole image as input without cropping car regions, which differs considerably from most current vehicle type classification methods that use cropped car regions as input. In addition, traditional approaches to LPR are typically broken down into localization, segmentation, and recognition steps. Rather than performing these preprocessing steps, the proposed plate number detector operates directly on plate images and performs well under skewed angles, varying lighting, and low resolution. Considering the future need to add new classes to the vehicle type detector, we design an auto-labeling flow that automatically creates bounding-box labels for training. After obtaining the color, model, and plate number, we look up the plate number in the registered-vehicle database to confirm whether the information is consistent. In this thesis, we develop two user interfaces (UIs), one for mobile devices and one for street monitoring. A user can immediately learn whether a car is a stolen vehicle by photographing it with a smartphone camera. In addition, our system achieves real-time video analysis for street monitoring. Notably, the experimental results show that our method can detect all vehicles in a frame simultaneously, even at skewed angles.
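Because the plate number detector outputs one bounding box per character rather than a segmented plate, the plate string can be read off simply by ordering the detected characters from left to right. The short Python sketch below shows that assembly step under an assumed detector output of (label, confidence, x-center) tuples; the tuple format, the confidence threshold, and the example values are illustrative assumptions, not taken from the thesis.

from typing import List, Tuple

# Assumed detector output: (character label, confidence, x-center of its bounding box).
Detection = Tuple[str, float, float]

def assemble_plate(detections: List[Detection], min_conf: float = 0.5) -> str:
    """Build a plate string from per-character detections without explicit segmentation."""
    kept = [d for d in detections if d[1] >= min_conf]  # drop low-confidence boxes
    kept.sort(key=lambda d: d[2])                       # order characters left to right
    return "".join(label for label, _, _ in kept)

# Example: four confident characters plus one low-confidence false detection.
boxes = [("B", 0.93, 42.0), ("A", 0.97, 18.0), ("7", 0.88, 95.0),
         ("C", 0.91, 66.0), ("1", 0.42, 80.0)]
print(assemble_plate(boxes))  # prints "ABC7"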
Keywords (Chinese) ★ deep learning
★ stolen vehicle detection
★ car model recognition
★ car color recognition
★ license plate recognition
Keywords (English) ★ deep learning
★ stolen vehicle detection
★ car model classification
★ car color classification
★ license plate recognition
Thesis Contents  Abstract (Chinese) i
Abstract (English) ii
Acknowledgments iii
Table of Contents iv
List of Figures vi
List of Tables ix
Chapter 1  Introduction 1
1.1 Research Background and Motivation 1
1.2 Literature Review 1
1.3 Thesis Objectives 4
1.4 Thesis Organization 4
Chapter 2  System Architecture and Hardware/Software Overview 5
2.1 System Architecture 5
2.2 Hardware 5
2.3 Software 7
Chapter 3  Main Methods and Algorithms 8
3.1 Overview of Object Detection Networks 8
3.2 The YOLOv2 Network 12
3.2.1 Detection Method 13
3.2.2 Network Architecture 17
3.2.3 Multi-Scale Training 20
3.3 The Vehicle Type Detection Network 20
3.3.1 Car Model Objects 20
3.3.2 Car Color Objects 21
3.3.3 Network Architecture 21
3.4 The Character Detection Network 22
3.4.1 Character Objects 22
3.4.2 Network Architecture 23
3.5 Car Model, Color, and Plate Number Recognition 24
3.6 Stolen Vehicle Detection Algorithm 26
3.6.1 Special Cases Affecting Recognition 26
3.6.2 Stolen Vehicle Detection Algorithm 26
3.6.3 Tracking-Assisted Recognition in Video Streams 28
Chapter 4  Training Data and Procedures 29
4.1 Vehicle Type Detection Network 29
4.1.1 Training Data 29
4.1.2 Auto-Labeling Workflow 34
4.2 Character Detection Network 36
4.2.1 Training Data 36
Chapter 5  Experimental Results 38
5.1 Vehicle Type Detection Network Tests 38
5.1.1 Training Process and Results of the Plate and Vehicle Body Detection Network 38
5.1.2 Training Process and Results of the Plate and Car Color Detection Network 39
5.1.3 Training Process and Results of the Vehicle Type Detection Network 40
5.1.4 Performance Tests of the Vehicle Type Recognition Network 41
5.2 Character Detection Network Tests 43
5.2.1 Training Process 43
5.2.2 Performance Tests 44
5.3 Car Model, Color, and Plate Number Recognition Tests 45
5.4 Stolen Vehicle Detection System Function Tests 45
5.4.1 Mobile App Function Tests 45
5.4.2 Real-Time Video Analysis Function Tests 46
Chapter 6  Conclusions and Future Work 50
6.1 Conclusions 50
6.2 Future Work 51
References 52
Advisor: Wen-June Wang (王文俊)    Date of Approval: 2017-07-28
