NCU Institutional Repository (中大機構典藏) - Theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/92824


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/92824


    Title: Application of deep learning in license plate recognition to Android mobile devices (車牌辨識應用深度學習於Android行動裝置)
    Authors: Liao, Shou-Ming (廖首名)
    Contributors: Department of Electrical Engineering
    Keywords: deep learning; license plate recognition; Android mobile device
    Date: 2023-08-08
    Issue Date: 2023-10-04 16:11:27 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: This study applies deep-learning neural networks to license plate localization and character recognition, using YOLOv7 (You Only Look Once v7) to train on and detect the target objects. Compared with earlier state-of-the-art real-time detectors such as YOLOv5 and EfficientDet, YOLOv7 reduces the parameter count by about 40% and the computational load by about 50% while achieving higher speed and accuracy, and it was therefore adopted. Because traditional license plate recognition methods rely on preprocessing such as binarization and erosion filtering, and are limited to fixed camera angles, lighting, and positions, the goal of this work is to combine a super-resolution model and a low-light enhancement algorithm with YOLOv7 so that multiple objects can be recognized quickly across a wide range of environments.
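    In this detection-based design, character recognition is itself object detection: YOLOv7 returns one bounding box per character, and the plate number is read off by ordering those boxes. The abstract does not describe this post-processing step, so the Python sketch below is only an illustrative assumption of how character detections might be assembled into a plate string; the detection tuple format and the confidence threshold are hypothetical.

        # Illustrative sketch (assumption): assemble per-character detections into a plate string.
        # Each detection is assumed to be (x_min, y_min, x_max, y_max, confidence, character).
        from typing import List, Tuple

        Detection = Tuple[float, float, float, float, float, str]

        def plate_string(detections: List[Detection], min_conf: float = 0.5) -> str:
            """Keep confident character boxes and read them left to right."""
            kept = [d for d in detections if d[4] >= min_conf]
            kept.sort(key=lambda d: d[0])              # order by x_min (left edge)
            return "".join(d[5] for d in kept)

        # Example: three character boxes detected on one plate.
        print(plate_string([(120, 40, 150, 90, 0.97, "B"),
                            (60, 42, 90, 91, 0.99, "A"),
                            (180, 41, 210, 92, 0.95, "C")]))   # -> "ABC"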
    The experimental data set consists of street scenes containing various vehicles at different distances: 5,524 images for license plate training, 4,676 images for character training, and 300 test images covering complex conditions such as distant views, skewed plates, blur, and dim lighting. License plate localization reaches 98.5% accuracy overall and exceeds 99% for plates reasonably captured within three meters, with 98.1% recall and an F1-score of 98.3%. Character recognition reaches 99.3% accuracy, 98.6% recall, and an F1-score of 98.95%.
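    The reported F1-scores are consistent with the usual harmonic mean of precision and recall, taking the quoted accuracy figures as precision (that reading is our assumption); the short check below simply reproduces the numbers stated in the abstract.

        # Consistency check (assumption: the quoted accuracy plays the role of precision).
        def f1(precision: float, recall: float) -> float:
            return 2 * precision * recall / (precision + recall)

        print(round(f1(0.985, 0.981), 3))    # license plate localization -> 0.983  (98.3%)
        print(round(f1(0.993, 0.986), 4))    # character recognition      -> 0.9895 (98.95%)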
    Because YOLOv7 is implemented in the PyTorch deep-learning framework but the system must ultimately run on mobile devices, the model had to be made more lightweight to reduce the load on the central processing unit (CPU). This required converting it across several frameworks: from PyTorch to Open Neural Network Exchange (ONNX), then to TensorFlow (TF), and finally to TensorFlow Lite (TFLite). An app was then implemented in Android Studio and deployed on an Android mobile industrial computer, achieving 99% recognition accuracy for plates reasonably captured within three meters together with real-time detection, so that the system can later be used by toll collectors and by police for enforcement.
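    The abstract names the intermediate formats but not the exact tools, so the Python sketch below shows one common PyTorch-to-ONNX-to-TensorFlow-to-TFLite route (torch.onnx.export, the onnx-tf backend, and tf.lite.TFLiteConverter); the tiny stand-in model, the input size, and the file names are assumptions, not the thesis's exact pipeline.

        # One common conversion route (a sketch under the assumptions stated above).
        import torch
        import onnx
        import tensorflow as tf
        from onnx_tf.backend import prepare

        model = torch.nn.Conv2d(3, 16, 3)          # tiny stand-in; the trained YOLOv7 model would go here
        dummy = torch.randn(1, 3, 640, 640)        # assumed input resolution

        # 1. PyTorch -> ONNX
        torch.onnx.export(model, dummy, "detector.onnx", opset_version=12,
                          input_names=["images"], output_names=["output"])

        # 2. ONNX -> TensorFlow SavedModel (via the onnx-tf backend)
        prepare(onnx.load("detector.onnx")).export_graph("detector_tf")

        # 3. TensorFlow SavedModel -> TFLite flat buffer for the Android app
        converter = tf.lite.TFLiteConverter.from_saved_model("detector_tf")
        converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization to lighten the CPU load
        with open("detector.tflite", "wb") as f:
            f.write(converter.convert())

    On the Android side, the resulting detector.tflite would typically be bundled as an app asset and executed through the TensorFlow Lite Interpreter API from the Android Studio project.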
    Appears in Collections: [Graduate Institute of Electrical Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

