Master's/Doctoral Thesis 110521159: Complete Metadata Record

DC Field | Value | Language
dc.contributor | 電機工程學系 | zh_TW
dc.creator | 廖首名 | zh_TW
dc.creator | Shou-Ming Liao | en_US
dc.date.accessioned | 2023-08-08T07:39:07Z
dc.date.available | 2023-08-08T07:39:07Z
dc.date.issued | 2023
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=110521159
dc.contributor.department | 電機工程學系 | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 本研究以深度學習神經網路進行車牌定位與字元辨識,主要使用YOLOv7 (You Only Look Once v7)訓練並辨識目標物件。相較於先前最突出的即時物件偵測模型,如YOLOv5 (You Only Look Once v5)與高效目標檢測模型 (EfficientDet),YOLOv7降低了40%參數量與50%運算量,並具有更快的速度與更高的正確率,因此決定採用YOLOv7。鑒於傳統車牌辨識方法需經過二值化與侵蝕篩選等前處理,且辨識環境受限於固定角度、光源與位置,本研究目標是結合超解析模型、低光源演算法與YOLOv7,在多種環境下不受限制地完成多物件快速辨識。本次實驗包含各式車輛且遠近不同的街景圖,總計車牌訓練資料5524張、字元訓練資料4676張,測試資料300張,涵蓋各式複雜環境(遠景、歪斜、模糊、昏暗等)。車牌定位正確率可達98.5%,三公尺內合理拍攝則超過99%,召回率為98.1%,F1-Score為98.3%;字元辨識正確率達99.3%,召回率為98.6%,F1-Score為98.95%。由於YOLOv7使用PyTorch深度學習框架,而未來需部署於行動裝置,模型須更輕量化以減小中央處理器 (Central Processing Unit, CPU)負擔,故需經過多種深度學習框架的轉換,從開放式交換神經網路 (Open Neural Network Exchange, ONNX)、張量流 (TensorFlow, TF)到輕量化張量流 (TensorFlow Lite, TFLite)。最後利用Android Studio實作APP程式,並於Android行動工業電腦上達成三公尺內合理拍攝車牌的99%正確率與即時偵測,以便未來供收費員與警察查緝之用。 | zh_TW
dc.description.abstract | This study uses deep learning neural networks for license plate detection and character recognition. The primary approach is to train and apply YOLOv7 (You Only Look Once v7) for object detection. Compared with previous prominent real-time object detection models such as YOLOv5 and EfficientDet, YOLOv7 reduces the parameter count by 40% and the computational load by 50% while offering faster inference and higher precision, and was therefore chosen for this study. Conventional license plate recognition methods often rely on preprocessing steps such as binarization and erosion filtering, and are limited to fixed angles, lighting conditions, and positions; this research therefore combines super-resolution models and low-light algorithms with YOLOv7 to achieve fast, robust, multi-object recognition across diverse environments. The experimental dataset comprises street scenes with a variety of vehicles at varying distances: 5,524 images for license plate training, 4,676 images for character training, and a test set of 300 images covering complex conditions such as distant views, tilted plates, blur, and dim lighting. License plate localization achieves 98.5% accuracy (over 99% for plates reasonably captured within three meters), 98.1% recall, and an F1-score of 98.3%; character recognition achieves 99.3% accuracy, 98.6% recall, and an F1-score of 98.95%. Because YOLOv7 is implemented in the PyTorch deep learning framework and the system is intended for mobile devices, the model had to be made lighter to reduce the burden on the Central Processing Unit (CPU); it was therefore converted across multiple deep learning frameworks, from Open Neural Network Exchange (ONNX) to TensorFlow (TF) and TensorFlow Lite (TFLite). Finally, an application was built with Android Studio and deployed on an Android mobile industrial computer, achieving real-time detection and 99% accuracy for license plates reasonably captured within three meters, for future use by toll collectors and police enforcement. | en_US
dc.subject | 深度學習 | zh_TW
dc.subject | 車牌辨識 | zh_TW
dc.subject | Android mobile device | en_US
dc.subject | deep learning | en_US
dc.title | 車牌辨識應用深度學習於Android行動裝置 | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Application of deep learning to license plate recognition on Android mobile devices | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
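
The precision, recall, and F1 figures quoted in the abstract are mutually consistent with the standard harmonic-mean definition of the F1-score. Reading the reported "accuracy" values as precision (an assumption, since the record does not define the term), a quick check gives:

\mathrm{F1} = \frac{2PR}{P+R}, \qquad
\mathrm{F1}_{\text{plate}} = \frac{2(0.985)(0.981)}{0.985+0.981} \approx 0.983, \qquad
\mathrm{F1}_{\text{char}} = \frac{2(0.993)(0.986)}{0.993+0.986} \approx 0.9895

which matches the quoted 98.3% and 98.95%.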
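
The abstract also outlines a framework-conversion chain (PyTorch to ONNX to TensorFlow to TensorFlow Lite) used to lighten the model for Android deployment. A minimal sketch of such a chain is shown below, assuming the onnx and onnx-tf packages are available; the file names and the 640x640 input size are illustrative assumptions, not details taken from the thesis.

# Hypothetical sketch of a PyTorch -> ONNX -> TensorFlow -> TFLite conversion,
# following the chain described in the abstract. File names and input size are assumptions.
import torch
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare  # from the onnx-tf package

# 1. Export the trained PyTorch detector to ONNX.
model = torch.load("yolov7_plate.pt", map_location="cpu")  # assumed full nn.Module checkpoint
model.eval()
dummy = torch.zeros(1, 3, 640, 640)  # assumed input resolution
torch.onnx.export(model, dummy, "yolov7_plate.onnx", opset_version=12)

# 2. Convert the ONNX graph to a TensorFlow SavedModel.
onnx_model = onnx.load("yolov7_plate.onnx")
prepare(onnx_model).export_graph("yolov7_plate_saved_model")

# 3. Convert the SavedModel to TensorFlow Lite for the Android app.
converter = tf.lite.TFLiteConverter.from_saved_model("yolov7_plate_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional quantization to reduce CPU load
with open("yolov7_plate.tflite", "wb") as f:
    f.write(converter.convert())

The resulting .tflite file can then be bundled into an Android Studio project and run with the TensorFlow Lite Interpreter on the device, which is the deployment path the abstract describes.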
