

Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/85095


Title: 模擬深度學習特徵進行條碼偵測 (Simulation of deep learning features used in barcode detection)
Authors: 李冠達 (Lee, Kuan-Ta)
Contributors: Department of Electrical Engineering
Keywords: Object detection; YOLO; Barcode localization; Region of Interest (ROI)
Date: 2021-01-22
Upload time: 2021-03-18 17:39:15 (UTC+8)
Publisher: National Central University
Abstract: Barcodes are ubiquitous in everyday life. Different fields design barcodes to suit their own products, so there are many barcode types, and it is not easy to detect all of them with a single approach. Since deep learning has made significant progress in object detection in recent years, this research aims to locate barcodes using deep learning.
Because the system only needs to locate each barcode as a region of interest (ROI) without classifying its type, and must run quickly, the small and fast YOLOv3-tiny network was chosen. The training images were collected with the professional scanner 1504P; 10,008 images were gathered and split 8:2 into training and validation data, and CipherLab provided a further 133 test images. On both the validation and test data the recall reached 95%, while the precision was 93% and 75%, respectively.
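As a rough illustration of the data preparation described above, the sketch below splits a flat directory of images 8:2 into training and validation sets. The directory name, file extension, and random seed are assumptions for the example, not details taken from the thesis.

```python
# Minimal sketch of an 8:2 train/validation split, assuming the scanner
# images sit in a flat "images/" directory (hypothetical layout).
import random
from pathlib import Path

def split_dataset(image_dir="images", train_ratio=0.8, seed=0):
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)      # reproducible shuffle
    cut = int(len(images) * train_ratio)     # 8:2 boundary
    return images[:cut], images[cut:]        # (training, validation)

train_set, val_set = split_dataset()
print(len(train_set), len(val_set))          # e.g. 8006 / 2002 for 10,008 images
```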
To run the network on a Personal Digital Assistant (PDA) with limited processing power, we first tried pruning the network, but the results were poor. We then analyzed the network structure and used image processing techniques to imitate its behavior. Visualizing the network revealed only a rough outline of its processing, so we instead imitated the important feature maps with three methods. The first method finds candidate barcode regions, locates the barcode centers with a 5×5 mask, and then frames each barcode as an ROI using active contours. The second method follows the same candidate-finding and ROI-framing steps but uses a smaller 3×3 mask to locate the centers. The third method runs the second method and then further filters the ROIs to reduce the number of false detections. On the test set provided by CipherLab, the three methods achieved recalls of 83%, 92%, and 91% and precisions of 81%, 46%, and 79%, with execution times on the PDA of 118 ms, 84 ms, and 156 ms, respectively. Compared with other studies, these methods offer roughly average accuracy but a large advantage in execution speed, so our algorithms are competitive in running time.
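The thesis imitates YOLOv3-tiny feature maps with hand-crafted image processing, and the exact operations are not given in this abstract. The sketch below only mirrors the general three-step shape described above (candidate regions, center localization with a small mask, active-contour ROI framing, plus an optional ROI filter) using generic OpenCV and scikit-image operations; the specific gradient, threshold, and snake settings are assumptions, not the author's actual method.

```python
# Illustrative approximation only: candidate regions -> center via small
# averaging mask -> active-contour ROI -> optional ROI filtering.
import cv2
import numpy as np
from skimage.segmentation import active_contour

def detect_barcode_rois(gray, mask_size=5, min_mean_grad=30):
    """Return (x, y, w, h) ROI boxes for a uint8 grayscale image."""
    # (1) Candidate regions: barcodes show dense horizontal gradients.
    grad = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    _, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((9, 21), np.uint8))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                                # drop tiny fragments
            continue
        patch = grad[y:y + h, x:x + w].astype(np.float32)

        # (2) Center: peak response of a mask_size x mask_size average (5x5 or 3x3).
        smoothed = cv2.blur(patch, (mask_size, mask_size))
        cy, cx = np.unravel_index(np.argmax(smoothed), smoothed.shape)

        # (3) ROI framing: active contour initialised as an ellipse at the center.
        t = np.linspace(0, 2 * np.pi, 100)
        init = np.stack([cy + 0.45 * h * np.sin(t), cx + 0.45 * w * np.cos(t)], axis=1)
        init[:, 0] = np.clip(init[:, 0], 0, h - 1)
        init[:, 1] = np.clip(init[:, 1], 0, w - 1)
        snake = active_contour(patch / 255.0, init, alpha=0.015, beta=10, gamma=0.001)
        r0, c0 = np.maximum(snake.min(axis=0).astype(int), 0)
        r1, c1 = snake.max(axis=0).astype(int)
        if r1 <= r0 or c1 <= c0:
            continue

        # (4) Filtering: keep ROIs whose interior still has strong gradients.
        if patch[r0:r1, c0:c1].mean() >= min_mean_grad:
            rois.append((x + c0, y + r0, c1 - c0, r1 - r0))
    return rois
```

In this sketch, the thesis's second method would correspond to calling the function with mask_size=3, and the third method would replace the simple gradient check in step (4) with the author's additional ROI screening.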
Appears in Collections: [Graduate Institute of Electrical Engineering] Master's and Doctoral Theses

Files in This Item:

File          Description    Size    Format    Views
index.html                   0 Kb    HTML      192


All items in NCUIR are protected by original copyright.

