Master's and Doctoral Theses: Detailed Record for 109522105




Author  Ting-Yi Li (李亭儀)    Graduate Department  Department of Computer Science and Information Engineering
Thesis Title  A Deep Learning System for Matching Electronic-Component Images (電子元件影像匹配的深度學習系統)
Related Theses
★ A video error concealment method for large areas and scene changes
★ Force-feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical-flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model-editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using multilevel-segmentation spatial relations
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
Files  Full text viewable in the repository after 2027-7-1 (embargoed)
Abstract (Chinese)  As technology advances, more and more products incorporate electronic components, and for mass production automated manufacturing has become the norm. Nearly all electronic products use printed circuit boards (PCBs), which carry electronic components of many sizes and shapes. Searching for these tiny components manually costs considerable labor and time, so replacing manual search with machines is an inevitable trend. An automated approach crops the component to be found into a template image and then uses matching techniques to locate the same component in a large image under test.
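As a point of reference for the classical approaches discussed next, the matching task itself can be sketched with plain normalized cross-correlation. This is a minimal illustration using OpenCV, not the system proposed in this thesis; the image paths and the 0.8 score threshold are assumptions.

    # Minimal classical-baseline sketch (NOT the thesis's network); the
    # image paths and the 0.8 score threshold are illustrative assumptions.
    import cv2
    import numpy as np

    search = cv2.imread("pcb_board.png", cv2.IMREAD_GRAYSCALE)    # large image under test
    template = cv2.imread("component.png", cv2.IMREAD_GRAYSCALE)  # one electronic component

    # Slide the template over the search image; each entry of `response`
    # is the normalized cross-correlation score at that top-left corner.
    response = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)

    # Every location above the threshold is a candidate occurrence.
    ys, xs = np.where(response >= 0.8)
    h, w = template.shape
    boxes = [(x, y, x + w, y + h) for x, y in zip(xs, ys)]
    print(len(boxes), "candidate matches")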
There have been many studies on feature matching for printed circuit boards, but most use traditional algorithms that cannot reliably detect different kinds of electronic components and are easily disturbed by component variation and background clutter. In this study, we therefore propose a matching network based on convolutional neural networks to remedy these shortcomings. An image of an electronic component is supplied as the template image, and the network searches the search image for matching components. Matching only has to detect whether any part of the search image corresponds to the template; the network does not need to learn component categories, so it can be applied broadly.
We modify SiamCAR, a single-object tracking network, into our matching network. The main modifications are: i. changing single-target tracking into matching of multiple similar objects, as sketched below; ii. improving the loss function. During training we also add image preprocessing and a learning-rate strategy.
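The thesis's exact heads and losses are not reproduced here, but modification i can be illustrated: where a tracker keeps only the single strongest response of the template features correlated over the search features, a matcher keeps every peak above a threshold and suppresses overlaps with non-maximum suppression [34]. A hedged PyTorch sketch follows; the score threshold and the fixed 64-pixel boxes are assumptions (SiamCAR itself regresses a box at every location).

    # Hedged sketch of modification i (multi-object matching).  The
    # backbone, the 0.6 threshold, and the fixed 64-pixel boxes are
    # assumptions; SiamCAR itself regresses a box at every location.
    import torch
    import torch.nn.functional as F
    from torchvision.ops import nms

    def match_all(template_feat, search_feat, thresh=0.6, box=64):
        # template_feat: (C, th, tw); search_feat: (C, H, W) -- features
        # from a shared (Siamese) backbone, e.g. a ResNet [3].
        # Depth-wise cross-correlation of the template over the search area.
        resp = F.conv2d(search_feat.unsqueeze(0),
                        template_feat.unsqueeze(1),
                        groups=template_feat.shape[0])
        score = torch.sigmoid(resp.mean(dim=1))[0]       # (H', W') score map

        # Single-object tracking would take only score.argmax(); instead,
        # keep every peak above the threshold ...
        ys, xs = torch.nonzero(score > thresh, as_tuple=True)
        scores = score[ys, xs]
        half = box // 2
        boxes = torch.stack([xs - half, ys - half, xs + half, ys + half], 1).float()
        # ... and remove duplicate detections with non-maximum suppression [34].
        keep = nms(boxes, scores, iou_threshold=0.5)
        return boxes[keep], scores[keep]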
In the experiments, we collected 851 images of electronic components: 794 in the training set (4,601 image pairs) and 57 in the validation set (291 image pairs). Testing on 1,200×1,200 images, the improved network reaches a precision of 98.33% and a recall of 94.60%. Compared with the original SiamCAR network, precision rises from 84.81% to 98.33%, a gain of 13.52 percentage points, and recall rises from 66.02% to 94.60%, a gain of 28.58 percentage points.
Abstract (English)  As science and technology advance, more and more products use electronic components, and for mass production automated manufacturing has become the trend. Almost all electronic products use printed circuit boards (PCBs), which carry electronic components of many sizes and shapes. Searching for these tiny components manually consumes a great deal of labor and time, so replacing manual search with machines is an inevitable trend. An automated search turns the component to be found into a template image and then uses matching techniques to find the same component in the large image under test.
Many past studies have addressed feature matching on printed circuit boards, but most rely on traditional algorithms that cannot effectively detect different types of electronic components and are easily affected by component variation and background. In this study, we therefore propose a matching network based on a convolutional neural network to remedy these shortcomings. An image of an electronic component is supplied as the template image, and the network searches the search image for matching components. Matching only needs to detect whether some part of the search image matches the template image; the network does not need to learn component categories, so it can be applied widely.
In this study, we modify SiamCAR, a single-object tracking network, into our matching network. The main modifications are: i. changing single-target tracking into matching of multiple similar objects; ii. improving the loss function. During training we additionally apply image preprocessing and a learning-rate strategy.
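The abstract does not specify the learning-rate strategy. One common choice when training Siamese networks, given here purely as an assumed illustration, is a linear warm-up followed by exponential decay:

    # The schedule below is an assumption, not the thesis's setting:
    # linear warm-up for a few epochs, then exponential decay.
    def lr_at_epoch(epoch, base_lr=1e-3, warmup=5, decay=0.9):
        if epoch < warmup:
            return base_lr * (epoch + 1) / warmup      # linear warm-up
        return base_lr * decay ** (epoch - warmup)     # exponential decay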
In the experiments, we collected 851 images of electronic components: 794 in the training set (4,601 image pairs) and 57 in the validation set (291 image pairs). Testing on 1,200×1,200 images, the final precision is 98.33% and the recall is 94.60%. Compared with the original SiamCAR network, precision increases from 84.81% to 98.33%, a gain of 13.52 percentage points, and recall increases from 66.02% to 94.60%, a gain of 28.58 percentage points.
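For reference, the reported percentages follow the standard definitions precision = TP/(TP+FP) and recall = TP/(TP+FN); the counts in this sketch are hypothetical, since the thesis reports only the final rates.

    # Standard definitions; the TP/FP/FN counts here are hypothetical,
    # chosen only to show the computation (the thesis reports rates only).
    def precision_recall(tp, fp, fn):
        precision = tp / (tp + fp)   # share of predicted matches that are correct
        recall = tp / (tp + fn)      # share of true components that are found
        return precision, recall

    p, r = precision_recall(tp=278, fp=5, fn=16)
    print(f"precision {p:.2%}, recall {r:.2%}")   # e.g. 98.23%, 94.56%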
Keywords (Chinese)  ★ matching (匹配)  ★ deep learning (深度學習)
Keywords (English)
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Research Motivation
1.2  System Architecture
1.3  System Features
1.4  Thesis Organization
Chapter 2  Related Work
2.1  Feature Matching
2.2  Single-Object Tracking Networks
2.3  Loss Functions
Chapter 3  The Electronic-Component Matching Network
3.1  SiamCAR Network Architecture
3.2  Preparation of Template and Search Images
3.3  Matching of Similar Objects
3.4  Loss-Function Modifications
Chapter 4  Experiments
4.1  Equipment and Development Environment
4.2  Training the Matching Network
4.3  Evaluation Criteria
4.4  Experimental Results
Chapter 5  Conclusions and Future Work
References
References
[1] D. Guo, J. Wang, Y. Cui, Z. Wang, and S. Chen, “SiamCAR: siamese fully convolutional classification and regression for visual tracking,” arXiv:1911.07241v2.
[2] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah, “Signature verification using a "Siamese" time delay neural network,” in Proc. 6th Int. Conf. on NIPS, Denver, Colorado, Nov.29 - Dec.2, 1993, pp.737-744.
[3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385.
[4] T. Dekel, S. Oron, M. Rubinstein, S. Avidan, and W. T. Freeman, “Best-buddies similarity for robust template matching,” in Proc. of the IEEE Conf. on CVPR, Boston, MA, Jun.7-12, 2015, pp.2021-2029.
[5] I. Talmi, R. Mechrez, and L. Zelnik-Manor, “Template matching with deformable diversity similarity,” arXiv:1612.02190v2.
[6] R. Kat, R. Jevnisek, and S. Avidan, “Matching pixels using co-occurrence statistics,” in Proc. of the IEEE Conf. on CVPR, Salt Lake City, UT, Jun.18-23, 2018, pp.1751-1759.
[7] J. Cheng, Y. Wu, W. Abd-Almageed, and P. Natarajan, “QATM: quality-aware template matching for deep learning,” arXiv:1903.07254v2.
[8] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556v6.
[9] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “Fully-convolutional siamese networks for object tracking,” arXiv:1606.09549v2.
[10] J. Valmadre, L. Bertinetto, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “End-to-end representation learning for correlation filter based tracking,” arXiv:1704.06036.
[11] Q. Wang, J. Gao, J. Xing, M. Zhang, and W. Hu, “DCFNet: discriminant correlation filters network for visual tracking,” arXiv:1704.04057.
[12] Q. Guo, W. Feng, C. Zhou, R. Huang, L. Wan, and S. Wang, “Learning dynamic siamese network for visual object tracking,” in Proc. of the IEEE Conf. on CVPR, Venice, Italy, Oct.22-29, 2017, pp.1781-1789.
[13] A. He, C. Luo, X. Tian, and W. Zeng, “A twofold siamese network for real-time object tracking,” arXiv:1802.08817.
[14] B. Li, J. Yan, W. Wu, Z. Zhu, and X. Hu, “High performance visual tracking with siamese region proposal network,” in Proc. of the IEEE Conf. on CVPR, Salt Lake City, UT, Jun.18-23, 2018, pp.8971-8980.
[15] Z. Zhu, Q. Wang, B. Li, W. Wu, J. Yan, and W. Hu, “Distractor-aware siamese networks for visual object tracking,” arXiv:1808.06048.
[16] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, “SiamRPN++: evolution of siamese visual tracking with very deep networks,” arXiv:1812.11703.
[17] Q. Wang, L. Zhang, L. Bertinetto, W. Hu, and P. H. S. Torr, “Fast online object tracking and segmentation: a unifying approach,” arXiv:1812.05050v2.
[18] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” arXiv:1506.01497v3.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of NIPS, Lake Tahoe, Nevada, Dec.3-8, 2012, pp.1-9.
[20] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” arXiv:1703.06870v3.
[21] R. Girshick, “Fast R-CNN,” arXiv:1504.08083v2.
[22] D. Zhou, J. Fang, X. Song, C. Guan, J. Yin, Y. Dai, and R. Yang, “IoU loss for 2D/3D object detection,” arXiv:1908.03851.
[23] H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, “Generalized intersection over union: a metric and a loss for bounding box regression,” arXiv:1902.09630v2.
[24] Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, “Distance-IoU loss: faster and better learning for bounding box regression,” arXiv:1911.08287.
[25] H. Fan and H. Wang, “Siamese cascaded region proposal networks for real-time visual tracking,” arXiv:1812.06148.
[26] G. Wang, C. Luo, Z. Xiong, and W. Zeng, “SPM-tracker: series-parallel matching for real-time visual object tracking,” arXiv:1904.04452.
[27] Q. Wu, Y. Yan, Y. Liang, Y. Liu, and H. Wang, “DSNet: deep and shallow feature learning for efficient visual tracking,” arXiv:1811.02208.
[28] Z. Zhang and H. Peng, “Deeper and wider Siamese networks for real-time visual tracking,” arXiv:1901.01660.
[29] D. Gordon, A. Farhadi, and D. Fox, “Re3: real-time recurrent regression networks for visual tracking of generic objects,” arXiv:1705.06368v3.
[30] D. Vaghela and P. K. Naina, “A review of image mosaicing techniques,” arXiv:1405.2539.
[31] A. Alaei and M. Delalandre, “A complete logo detection/recognition system for document images,” in Proc. of IAPR, Tours, France, April 7-10, 2014, pp.324-328.
[32] A. Malti, R. Hartley, A. Bartoli, and J. Kim, “Monocular template-based 3D reconstruction of extensible surfaces with local linear elasticity,” in Proc. of the IEEE Conf. on CVPR, Portland, OR, Jun.23-28, 2013, pp.1522-1529.
[33] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.32, no.9, pp.1627-1645, 2010.
[34] A. Neubeck and L. V. Gool, “Efficient non-maximum suppression,” in Proc. of 18th Int. Conf. on Pattern Recognition (ICPR), Hong Kong, Aug.20-24, 2006, pp.850-855.
[35] Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: fully convolutional one-stage object detection,” arXiv:1904.01355v5.
Advisor  Din-Chang Tseng (曾定章)    Date of Approval  2022-8-6
