Master's/Doctoral Thesis 107521088: Complete Metadata Record

DC Field  Value  Language
dc.contributor  Department of Electrical Engineering  zh_TW
dc.creator  李冠達  zh_TW
dc.creator  Kuan-Ta Lee  en_US
dc.date.accessioned  2021-01-22T07:39:07Z
dc.date.available  2021-01-22T07:39:07Z
dc.date.issued  2021
dc.identifier.uri  http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=107521088
dc.contributor.department  Department of Electrical Engineering  zh_TW
dc.description  National Central University  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  Barcodes can be seen everywhere in everyday life. Different fields design barcodes that fit their own products and needs, which has led to a very large number of barcode types, so it is not easy to find every barcode with a single method. Deep learning has made great progress in object recognition in recent years, so the goal of this study is to locate barcodes with deep learning. Because the required system only needs to localize each barcode as a region of interest (ROI) without classifying it, and the system has to run fast enough, the smaller and computationally faster YOLOv3-tiny network architecture is used. The training images were collected with the professional scanner 1504P; 10,008 images were collected in total and split 8:2 into training data and validation data, and CipherLab provided a further 133 test images. The results on the validation data and the test data show that recall reaches 95% on both, while precision is 93% and 75%, respectively. To run on a personal digital assistant (PDA) with limited processing power, we first tried pruning the network, but the results were poor; we then tried to analyze the network and imitate its behavior with image processing. Visualizing the network only reveals a rough outline of its processing, so we finally tried to imitate the important feature maps and propose three imitation methods. The first method finds the barcode region, uses a 5×5 mask to find the barcode center, and finally frames the barcode with the active contour method; the framed barcode is the ROI. The second method finds the barcode region and frames the barcode as the ROI in the same way as the first method, but uses a smaller 3×3 mask to find the barcode center. The third method runs the second method and then further filters the ROIs to reduce the number of falsely detected regions. Testing these three methods on the test set provided by CipherLab gives recalls of 83%, 92%, and 91%, precisions of 81%, 46%, and 79%, and execution times on the PDA of 118 ms, 84 ms, and 156 ms, respectively. Compared with other methods, our methods are roughly average in accuracy but have a large advantage in execution speed, so our algorithms are competitive in speed.  zh_TW
dc.description.abstract  Barcodes are ubiquitous in modern life. Different types of barcodes are designed for different applications, so it is not easy to detect all types of barcodes with a single approach. In recent years, deep learning has achieved significant progress in object detection, so this research aims to locate barcodes using deep learning. Because the system only needs to locate each barcode as a region of interest (ROI) without recognizing its type, and must run efficiently, the small and fast YOLOv3-tiny network was chosen. The training images were captured with the professional scanner 1504P; 10,008 images were collected and divided into training data and verification data in an 8:2 ratio, and 133 test images were provided by CipherLab. The results on the verification data and testing data showed that the recall reached 95% on both, while the precision was 93% and 75%, respectively. To implement the network on a resource-limited personal digital assistant (PDA), we first tried to prune the network, but the performance was not good. Hence we analyzed the network structure and used image-processing techniques to imitate the network behavior. Visualizing the network revealed only a coarse picture of its processing, so we finally tried to imitate some important feature maps with three methods. The first method searched for barcode candidates, located the barcode centers with a 5×5 mask, and then used the active contour technique to frame the barcode as an ROI. The second method followed the same candidate-search and ROI-framing steps but used a smaller 3×3 mask to locate the barcode centers. The third method extended the second method with an additional stage that filtered the ROIs to reduce the number of erroneously detected areas. Evaluated on the test data provided by CipherLab, the three methods achieved recalls of 83%, 92%, and 91% and precisions of 81%, 46%, and 79%, with execution times on the PDA of 118 ms, 84 ms, and 156 ms, respectively. The proposed methods achieved recall and precision similar to other studies but with a significant improvement in running time, so our algorithms are competitive in execution speed compared to other approaches.  en_US
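The first imitation method described in the abstract (barcode candidate search, a 5×5 center mask, and active contour framing) could look roughly like the following Python sketch. This is only an illustration under assumptions: the Sobel-based candidate step, every threshold, and the helper name detect_barcode_rois are hypothetical and not taken from the thesis.

import numpy as np
import cv2
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def detect_barcode_rois(gray):
    # Hypothetical sketch of the abstract's first imitation method.
    # 1) Barcode candidates: barcode bars produce strong, dense horizontal
    #    gradients, so threshold the Sobel response and close gaps between bars.
    grad = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    cand = (np.abs(grad) > 60).astype(np.uint8)          # threshold is a guess
    cand = cv2.morphologyEx(cand, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

    # 2) Barcode center: a 5x5 averaging mask keeps only pixels whose whole
    #    neighborhood is covered by candidate responses (second method: 3x3).
    density = cv2.blur(cand.astype(np.float32), (5, 5))
    ys, xs = np.where(density > 0.9)
    if len(ys) == 0:
        return []
    cy, cx = int(ys.mean()), int(xs.mean())              # single barcode assumed

    # 3) Frame the barcode with an active contour initialized as a circle
    #    around the center; the converged snake's bounding box is the ROI.
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([cy + 60 * np.sin(theta), cx + 60 * np.cos(theta)])
    snake = active_contour(gaussian(gray, 3), init,
                           alpha=0.015, beta=10, gamma=0.001)
    y0, x0 = snake.min(axis=0)
    y1, x1 = snake.max(axis=0)
    return [(int(x0), int(y0), int(x1), int(y1))]

A call such as detect_barcode_rois(cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)) would return a list of (x_min, y_min, x_max, y_max) boxes; the second method in the abstract would only change the mask size in step 2, and the third would add a further filtering pass over the returned ROIs.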
dc.subject  Object detection  zh_TW
dc.subject  YOLO  zh_TW
dc.subject  Barcode localization  zh_TW
dc.subject  Region of interest  zh_TW
dc.subject  Object detection  en_US
dc.subject  YOLO  en_US
dc.subject  Barcode localization  en_US
dc.subject  Region of Interest  en_US
dc.title  Simulation of deep learning features used in barcode detection  zh_TW
dc.language.iso  zh-TW  zh-TW
dc.title  Simulation of deep learning features used in barcode detection  en_US
dc.type  Master's/Doctoral thesis  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
