Master's/Doctoral Thesis 108522087: Detailed Record




Name 吳怡萱 (I-Hsuan Wu)   Department Computer Science and Information Engineering
Thesis Title Similarity Comparison for Electronic Components on Printed Circuit Boards using Deep Learning
Related Theses
★ Video error concealment for large areas and scene changes ★ Force feedback correction and display in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis ★ A laparoscopic cholecystectomy simulation system
★ Dynamically loaded multiresolution terrain modeling for flight simulation systems ★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation ★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques ★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria ★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation built on dynamically loaded multiresolution terrain modeling ★ Face detection and feature extraction using spatial relations from multilevel segmentation
★ Wavelet-based image watermarking and compression ★ Appearance-preserving and view-dependent multiresolution modeling
Files Full text available via the thesis system after 2026-7-30
Abstract (Chinese) Taiwan's printed circuit board (PCB) industry is highly competitive internationally. PCB applications are extremely wide: almost every electronic product uses a PCB as the substrate on which components are mounted, forming a printed circuit board assembly (PCBA). Competition in the industry is fierce, so product quality control is essential for keeping a product's edge; to prevent defective products from reaching consumers, defect inspection of PCBAs has become an important issue.
With traditional inspection methods, achieving good detection results usually requires algorithms designed for specific components. Such algorithms are very fast, but their generalization is poor: whenever a new, unseen component appears, a new algorithm must be designed. In recent years, deep learning has developed rapidly and can effectively improve defect detection. In this study, we use deep learning to compare the similarity between the image of a component under test and the corresponding defect-free component image on a master board, and thereby judge whether the test component is defective. The difficulty is that the boundary between defect and non-defect is hard to define; for example, a wrong component, a different color, a large rotation, or a large shift is a defect, while brightness changes, slight color variations, background variations, small rotations, and small shifts are not. Although this boundary is hard to define with traditional rules, learning an accurate decision criterion from samples is quite practical.
In this study, we modify the Siamese network architecture to build our comparison network. The main modifications are: i. using a residual network (ResNet-18) as the feature extraction subnetwork; ii. reorganizing ResNet-18 into pre-activation modules; iii. adding attention modules to improve learning; iv. comparing similarity with location-related features to meet the application's requirements; and v. proposing a new loss function matched to the new feature comparison metric. In addition, during training we apply image preprocessing, image augmentation, a learning-rate strategy, and random image shifts. For the experiments, we collected 3,528 pairs of component images from printed circuit boards: 1,697 similar pairs and 1,831 dissimilar pairs. The similar pairs were split into 1,547 training pairs and 150 validation pairs; the dissimilar pairs into 1,681 training pairs and 150 validation pairs. Image augmentation raised the training set to 16,140 pairs and the validation set to 1,500 pairs.
The experimental results show that with the original ResNet-18 as the feature extraction subnetwork, the training-set precision is 78.88% and the recall 62.70%, while the validation-set precision is 84.32% and the recall 64.53%. With the new loss function and its matching feature comparison metric, together with image preprocessing, image augmentation, the learning-rate strategy, and random image shifts, the modified network with the added modules reaches a final training-set precision of 100.00% and recall of 100.00%, and a final validation-set precision of 100.00% and recall of 99.33%.
Abstract (English) The printed circuit board (PCB) industry of Taiwan is extremely competitive worldwide. PCB applications are quite extensive: almost all electronic components are soldered onto PCBs to work, and a soldered PCB is called a printed circuit board assembly (PCBA). Competition in the PCBA market is very fierce, so product quality control is essential to maintaining product advantages. To prevent defective products from being delivered to consumers, defect detection for PCBAs has become an important issue.
Traditional detection methods usually require specific algorithms designed for particular components to obtain good detection results. Although such detection is very fast, its generalization is poor: if an unseen component appears, a new algorithm must be designed. In recent years, deep learning techniques have developed vigorously and have proven effective in improving the performance of many applications. In this study, we use deep learning to compare the similarity between the electronic components on a test PCB and those on the related master PCB in order to find defective components on the test PCB. The similarity criterion is hard to define; for example, a wrong component, a different color, a large rotation, or a large shift is a defect, whereas small changes in brightness, color, rotation, shift, or background are not. Although the criterion is hard to define with traditional concepts, learning it from training samples is very practical.
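As a concrete illustration of the pair decision just described, the following minimal PyTorch sketch embeds both component crops with a shared network and flags the pair as a defect when the feature distance exceeds a threshold; the embedding network, the Euclidean distance, and the threshold value are assumptions for illustration, not the thesis's exact settings.

    import torch
    import torch.nn.functional as F

    def pair_decision(embed, img_master, img_test, threshold=1.0):
        """Return a boolean per pair: True means dissimilar, i.e. a suspected defect.

        `embed` is any shared feature extractor mapping (N, C, H, W) images to
        (N, D) embeddings; `threshold` would be tuned on validation pairs.
        """
        with torch.no_grad():
            f_master = embed(img_master)                  # defect-free reference
            f_test = embed(img_test)                      # component under test
            dist = F.pairwise_distance(f_master, f_test)  # Euclidean distance per pair
        return dist > threshold                           # far apart -> defect candidate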
In this study, we modified the Siamese network to build our comparison network. The main modifications are: i. the feature extraction subnetwork is changed to ResNet-18; ii. ResNet-18 is reorganized into a pre-activation architecture; iii. attention modules are added to improve learning; iv. location-related features are defined and used for similarity comparison, per the application's special requirements; and v. a new loss function is proposed to match the new features. Moreover, extra processes such as contrast enhancement, data augmentation, learning-rate strategies, and random shifts are applied in training.
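A minimal PyTorch sketch of the pieces listed above: a pre-activation residual block (batch norm and ReLU before each convolution), a squeeze-and-excitation style channel-attention module (the abstract does not name the attention design, so SE is used here as one plausible choice), and a shared backbone that keeps spatial feature maps so similarity can be compared per location. The block counts, channel widths, and attention placement are illustrative assumptions; the thesis's actual backbone is the full ResNet-18.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SEAttention(nn.Module):
        """Squeeze-and-excitation channel attention."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc1 = nn.Linear(channels, channels // reduction)
            self.fc2 = nn.Linear(channels // reduction, channels)

        def forward(self, x):
            w = F.adaptive_avg_pool2d(x, 1).flatten(1)        # squeeze: global average
            w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))  # excitation: channel weights
            return x * w.view(x.size(0), -1, 1, 1)            # reweight feature channels

    class PreActBlock(nn.Module):
        """Residual block with pre-activation ordering: BN -> ReLU -> conv."""
        def __init__(self, cin, cout, stride=1):
            super().__init__()
            self.bn1 = nn.BatchNorm2d(cin)
            self.conv1 = nn.Conv2d(cin, cout, 3, stride, 1, bias=False)
            self.bn2 = nn.BatchNorm2d(cout)
            self.conv2 = nn.Conv2d(cout, cout, 3, 1, 1, bias=False)
            self.short = (nn.Conv2d(cin, cout, 1, stride, bias=False)
                          if stride != 1 or cin != cout else nn.Identity())
            self.attn = SEAttention(cout)

        def forward(self, x):
            out = self.conv1(F.relu(self.bn1(x)))
            out = self.conv2(F.relu(self.bn2(out)))
            return self.attn(out) + self.short(x)             # attention on the residual path

    class SiameseComparator(nn.Module):
        """Shared backbone applied to both images; keeps spatial feature maps."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 64, 7, 2, 3, bias=False),
                PreActBlock(64, 64),
                PreActBlock(64, 128, stride=2),
                PreActBlock(128, 256, stride=2),
                PreActBlock(256, 512, stride=2),
            )

        def forward(self, a, b):
            fa, fb = self.backbone(a), self.backbone(b)       # (N, 512, H', W') each
            # Location-related comparison: distance between features at matching
            # spatial positions, giving an (N, H', W') distance map.
            return ((fa - fb) ** 2).sum(dim=1).sqrt()

For example, SiameseComparator()(x1, x2) on two batches of aligned crops yields a per-location distance map whose aggregate (mean or maximum) can feed the loss or the final defect decision.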
In the experiments, we collected 3,528 pairs of electronic component images from PCBs as samples; of these, 1,697 pairs are similar and 1,831 pairs are dissimilar. Among them, 1,547 and 150 similar pairs are used for training and validation, respectively; 1,681 and 150 dissimilar pairs are used for training and validation, respectively. Every pair was augmented fivefold, yielding 16,140 training pairs and 1,500 validation pairs in total.
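The fivefold figures are consistent: (1,547 + 1,681) training pairs × 5 = 16,140 and (150 + 150) validation pairs × 5 = 1,500. A minimal torchvision sketch of such pair augmentation, assuming brightness jitter plus small rotations and shifts as the label-preserving perturbations (consistent with the variations the abstract treats as non-defects, though not necessarily the thesis's exact transforms):

    from torchvision import transforms

    # Label-preserving perturbations: slight lighting change, small rotation/shift.
    augment = transforms.Compose([
        transforms.ColorJitter(brightness=0.2),
        transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),
    ])

    def expand_pairs(pairs, copies=5):
        """Replicate each (master, test, label) pair into `copies` pairs."""
        out = []
        for master, test, label in pairs:
            out.append((master, test, label))      # keep the original pair
            for _ in range(copies - 1):            # add perturbed copies
                out.append((augment(master), augment(test), label))
        return out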
The experimental results show that with the original ResNet-18 as the feature extraction subnetwork, the training-set precision is 78.88% and the recall 62.70%, while the validation-set precision is 84.32% and the recall 64.53%. With the new loss function and its matching feature comparison metric, together with image preprocessing, data augmentation, learning-rate strategies, and randomly shifted images, the modified network with the added modules reaches a final training-set precision of 100.00% and recall of 100.00%, and a final validation-set precision of 100.00% and recall of 99.33%.
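For reference, precision = TP / (TP + FP) and recall = TP / (TP + FN), taking "similar" as the positive class (an assumption here; the thesis defines its evaluation criteria in Section 4.3). The quoted validation figures are consistent with, for example, 149 true positives, no false positives, and 1 false negative among the 150 positive validation pairs:

    def precision_recall(tp, fp, fn):
        """Standard precision/recall from pair-classification counts."""
        precision = tp / (tp + fp)   # fraction of predicted positives that are correct
        recall = tp / (tp + fn)      # fraction of actual positives that are recovered
        return precision, recall

    # Hypothetical counts matching the quoted validation results:
    # precision_recall(149, 0, 1) -> (1.0, 0.99333...), i.e. 100.00% and 99.33%.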
Keywords (Chinese) ★ 深度學習 (deep learning)
★ 相似度比對 (similarity comparison)
Keywords (English) ★ deep learning
★ similarity comparison
Table of Contents Abstract (Chinese) ii
Abstract (English) iv
Acknowledgments vi
Table of Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 System Overview 4
1.3 Thesis Features 6
1.4 Thesis Organization 7
Chapter 2 Related Work 8
2.1 Image Similarity 8
2.2 Image Similarity Comparison with Convolutional Neural Networks 11
2.3 Attention Mechanisms 15
Chapter 3 Similarity Comparison Network for Electronic Components 20
3.1 Siamese Network Architecture 20
3.2 Feature Extraction Subnetwork 21
3.3 Loss Function 32
3.4 Location-Related Similarity Comparison and Visualization 35
Chapter 4 Experiments and Results 38
4.1 Equipment and Development Environment 38
4.2 Training the Similarity Comparison Network 38
4.3 Evaluation Criteria 42
4.4 Experiments and Results 44
Chapter 5 Conclusions and Future Work 55
References 57
References [1] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol.60, no.2, pp.91-110, 2004.
[2] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah, “Signature verification using a "Siamese" time delay neural network,” in Proc. 6th Int. Conf. on NIPS, Denver, Colorado, Nov.29 - Dec.2, 1993, pp.737-744.
[3] S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, June 20-25, 2005, pp.539-546.
[4] K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, Academic Press, 1994, pp.474-485, Ch.VIII.5.
[5] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385.
[6] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” in IEEE Transactions on Image Processing, vol.13, no.4, pp.600-612, 2004.
[7] M. Heusel, H. Ramsauer, T. Unterthiner, and B. Nessler, “GANs trained by a two time-scale update rule converge to a local Nash equilibrium,” arXiv:1706.08500v6.
[8] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” arXiv:1707.02921v1.
[9] J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” arXiv:1511.04587v2.
[10] J. Yu, Y. Fan, J. Yang, N. Xu, Z. Wang, and X. Wang, “Wide activation for efficient and accurate image super-resolution,” arXiv:1808.08718v2.
[11] M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” arXiv:1803.02735v1.
[12] E. Tola, V. Lepetit, and P. Fua, “A fast local descriptor for dense matching,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, June 23-28, 2008, pp.1-8.
[13] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proc. Alvey Vision Conference, Manchester, Aug. 31-Sep. 2, 1988, pp.147-151.
[14] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol.86, no.11, pp.2278-2324, Nov. 1998.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of NIPS 2012, Lake Tahoe, Nevada, Dec.3-8, 2012, pp.1-9.
[16] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556v6.
[17] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv:1505.04597v1.
[18] G. Koch, R. Zemel, and R. Salakhutdinov, “Siamese neural networks for one-shot image recognition,” in ICML Deep Learning Workshop, 2015.
[19] A. Stylianou, R. Souvenir, and R. Pless, “Visualizing deep similarity networks,” arXiv:1901.00536v1.
[20] E. Hoffer and N. Ailon, “Deep metric learning using triplet network,” arXiv:1412.6622v4.
[21] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “DeepFace: closing the gap to human-level performance in face verification,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, June 23-28, 2014, pp.1701-1708.
[22] S. Woo, J. Park, J.-Y. Lee, and I. Kweon, “CBAM: convolutional block attention module,” arXiv:1807.06521v2.
[23] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-excitation networks,” arXiv:1709.01507v4.
[24] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, “Dual attention network for scene segmentation,” arXiv:1809.02983v4.
[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” arXiv:1706.03762v5.
[26] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, “Self-attention generative adversarial networks,” arXiv:1805.08318v2.
[27] Y. Chen, Y. Kalantidis, J. Li, S. Yan, and J. Feng, “A2-Nets: double attention networks,” arXiv:1810.11579v1.
[28] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, “ECA-Net: efficient channel attention for deep convolutional neural networks,” arXiv:1910.03151v4.
[29] Z. Gao, J. Xie, Q. Wang, and P. Li, “Global second-order pooling convolutional networks,” arXiv:1811.12006v2.
[30] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” arXiv:1603.05027v3.
[31] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), New York, NY, June 17-22, 2006, pp.1735-1742.
[32] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” arXiv:1512.04150v1.
[33] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” arXiv:1610.02391v4.
[34] D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980.
Advisor 曾定章   Date of Approval 2021-7-28