Master's/Doctoral Thesis 109522097: Detailed Record




Author: Jia-Nan Zhong (鍾佳男)    Department: Computer Science and Information Engineering
Thesis title: Anomaly Detection for Electronic Components Using an Adaptive-Adversarial Deep Learning System (電子元件異常偵測的適應性對抗深度學習系統)
Full text: available for browsing in the repository system after 2027-7-1.
Abstract (Chinese): In recent years, deep learning techniques have developed rapidly and have been applied to many fields, including image classification, recognition, detection, and segmentation, with breakthrough results. In many real-world application domains, however, abnormal samples are very difficult to obtain, so there is not enough data for supervised deep learning. To solve this problem, semi-supervised deep learning methods for anomaly detection, which are trained with normal images only, have been developed.
Foreign matter on the soldered pins of electronic components on a printed circuit board (PCB), solder short circuits, and damage on the components themselves are difficult to inspect manually, and abnormal samples are also hard to collect. We therefore propose a semi-supervised anomaly detection deep learning system that detects whether an electronic component is abnormal and, at the same time, marks the regions that may be abnormal.
Our anomaly detection system is adapted from the Skip-GANomaly and DFR anomaly detection networks. Depending on whether the application allows abnormal information to participate in training to improve performance, we propose two networks: GANomaly-like DA, which is trained with normal images only, and GAN DFR light, which can incorporate a small amount of abnormal information to improve performance. The main improvements in GANomaly-like DA are: i. removing the skip connections of the Skip-GANomaly generator, so that the images reconstructed by the generator better separate normal from abnormal samples; ii. adding an attention module to the generator, so that the network focuses on important features; iii. adjusting the loss functions; iv. proposing a method that adaptively adjusts the numbers of generator and discriminator training steps, which alleviates the imbalance between the generator and the discriminator during training and makes GAN training more stable. The main improvements in GAN DFR light are: i. adding a discriminator to DFR and training it as a generative adversarial network; ii. modifying the convolutions in the DFR generator and reducing the number of channels, which speeds up computation while maintaining performance.
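The abstract does not spell out the adaptive rule, so the following is only a minimal sketch of the general idea, assuming the decision is driven by the ratio of the generator's and discriminator's adversarial losses; the function name `adaptive_update_counts`, the thresholds, and the cap on extra steps are all illustrative, not values from the thesis.

```python
def adaptive_update_counts(loss_g, loss_d, ratio_hi=1.5, ratio_lo=0.67, max_extra=3):
    """Decide how many generator / discriminator updates to run in the next iteration.

    loss_g, loss_d: latest adversarial losses of the generator and discriminator.
    When one side's loss is much larger than the other's, that side receives extra
    update steps so the two networks stay roughly balanced.  The thresholds and
    max_extra are placeholder hyper-parameters.
    """
    ratio = loss_g / (loss_d + 1e-8)
    if ratio > ratio_hi:                                    # generator is falling behind
        return min(max_extra, round(ratio)), 1
    if ratio < ratio_lo:                                    # discriminator is falling behind
        return 1, min(max_extra, round(1.0 / max(ratio, 1e-8)))
    return 1, 1                                             # roughly balanced: one step each
```

A training loop would call this after each iteration and then run the returned numbers of generator and discriminator steps before measuring the losses again.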
In the experiments, we mainly used images of the regions between the pins of electronic components on printed circuit boards for training and validation. There were 46,651 normal images and 211 abnormal images; the normal images were split into a training set of 4,665 images and a validation set of 41,986 images, and all abnormal images were used for validation. The unmodified Skip-GANomaly achieved a specificity (screen-out rate) of 87.57%, a recall of 86.73%, and an AUCROC of 0.96 on the validation samples. After the architectural changes, the training improvements, the adaptive adjustment of the training frequencies, and the re-weighting of the anomaly score calculation, the final GANomaly-like DA raised the specificity of the validation samples to 99.98%, the recall to 100%, and the AUCROC to 1.0. Using GAN DFR light, with some abnormal images assisting anomaly detection, raised the specificity to 100%, the recall to 100%, and the AUCROC to 1.0.
For the newly proposed adaptive training-frequency method, we additionally collected images of integrated circuits with external damage: 6,498 normal images and 163 abnormal images, with the normal images split into a training set of 3,248 images and a validation set of 3,250 images, and all abnormal images used for validation. By comparing the adversarial losses during training, we found that, for different datasets, adding the adaptive training-frequency adjustment with suitably tuned parameters makes GAN training more stable; the final results on the validation samples were a specificity of 99.75%, a recall of 100%, and an AUCROC of 0.999.
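For reference, the specificity (screen-out rate), recall, and AUCROC values quoted above can be computed from per-image anomaly scores roughly as follows; this is a generic sketch, not the thesis code, and the threshold is a placeholder that would in practice be chosen on the validation set.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(scores, labels, threshold):
    """scores: per-image anomaly scores; labels: 1 = abnormal, 0 = normal."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pred = (scores >= threshold).astype(int)     # images flagged as abnormal

    tp = np.sum((pred == 1) & (labels == 1))
    fn = np.sum((pred == 0) & (labels == 1))
    tn = np.sum((pred == 0) & (labels == 0))
    fp = np.sum((pred == 1) & (labels == 0))

    recall = tp / (tp + fn)              # fraction of abnormal images detected
    specificity = tn / (tn + fp)         # fraction of normal images correctly passed (screen-out rate)
    auc = roc_auc_score(labels, scores)  # threshold-independent ranking quality
    return specificity, recall, auc
```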
Abstract (English): In recent years, deep learning methods have developed rapidly and have been applied to fields such as image classification, recognition, object detection, and segmentation, achieving breakthrough progress. However, in some real-life applications it is difficult to obtain enough abnormal samples for supervised deep learning. To solve this problem, semi-supervised deep learning anomaly detection methods, trained on normal images only, have been developed.
Damage to electronic components on the printed circuit board (PCB) is difficult to detect manually, and abnormal samples are also not easy to obtain. Therefore, we propose a semi-supervised deep learning anomaly detection method to detect whether an electronic component is abnormal and to mark the abnormal regions.
Our methods are modified from Skip-GANomaly and DFR. Depending on whether anomaly information participates in training, we propose GANomaly-like DA and GAN DFR light. The main improvements of GANomaly-like DA are: i. removing the skip connections between the encoder and the decoder, so that the images reconstructed by the generator better distinguish normal from abnormal samples; ii. adding an attention module to the generator, so that the network attends to the more important features; iii. adjusting the loss functions; iv. adaptively adjusting the training frequencies of the generator and the discriminator to make GAN training more stable. The main improvements of GAN DFR light are: i. adding a discriminator to DFR; ii. modifying the convolutional layers and reducing the number of channels in the DFR generator to reduce execution time while maintaining performance.
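The abstract does not name the specific attention module inserted into the generator; as one common possibility, a squeeze-and-excitation style channel-attention block (in the spirit of [31]) could look like the sketch below, with the class name and reduction ratio chosen for illustration only.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative only)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average per channel
        self.fc = nn.Sequential(                     # excitation: per-channel weights in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-weight feature maps channel-wise
```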
In the experiments, we used images of the regions between the pins of electronic components on printed circuit boards for training and validation. After the architectural adjustments, the training improvements, the adaptive adjustment of the generator and discriminator training frequencies, and the re-weighting of the anomaly score calculation, GANomaly-like DA increased the specificity from 87.57% to 99.98%, the recall from 86.73% to 100%, and the AUCROC score from 0.96 to 1.0. By using some abnormal images in training, GAN DFR light increased the specificity from 87.57% to 100%, the recall from 86.73% to 100%, and the AUCROC score from 0.96 to 1.0.
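The baseline Skip-GANomaly computes its anomaly score as a weighted combination of a reconstruction error and a latent-feature error; assuming the thesis keeps this general form, the re-weighting mentioned above corresponds to tuning the factor `lam` in a sketch like the following, where the 0.9 default is only a placeholder.

```python
import torch

def anomaly_score(x, x_hat, z, z_hat, lam=0.9):
    """Weighted combination of image reconstruction error and latent-feature error.

    x, x_hat : input image and its reconstruction by the generator
    z, z_hat : latent features of the input and of the reconstruction
    lam      : weighting factor; 0.9 is a placeholder, not the thesis value
    """
    recon_err = torch.mean(torch.abs(x - x_hat))    # pixel-space reconstruction error
    latent_err = torch.mean((z - z_hat) ** 2)       # feature-space error
    return lam * recon_err + (1.0 - lam) * latent_err
```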
For the newly proposed method of adaptively adjusting the training frequencies of the generator and the discriminator, we additionally collected images of damaged ICs for experiments. By comparing the adversarial losses during training, we confirmed that, on different datasets, this adaptive adjustment makes GAN training more stable; we finally obtained a specificity of 99.75%, a recall of 100%, and an AUCROC score of 0.999.
Keywords (Chinese) ★ anomaly detection (異常偵測)
★ generative adversarial network (生成對抗網路)
Keywords (English) ★ anomaly detection
★ GAN
Table of Contents
Abstract (Chinese) .................. ii
Abstract (English) .................. iv
Acknowledgements .................... vi
Table of Contents ................... vii
List of Figures ..................... ix
List of Tables ...................... xi
Chapter 1  Introduction ............. 1
  1.1  Research motivation .......... 1
  1.2  System architecture .......... 2
  1.3  System features .............. 4
  1.4  Thesis organization .......... 5
Chapter 2  Related Work ............. 6
  2.1  Deep learning for anomaly detection ..... 6
  2.2  Attention mechanisms ......... 17
Chapter 3  Improved Anomaly Detection Networks ..... 23
  3.1  Skip-GANomaly network architecture ...... 23
  3.2  Modifications based on Skip-GANomaly .... 26
  3.3  Loss functions ............... 28
  3.4  DFR network architecture ..... 28
  3.5  Adaptive adjustment of training frequency ..... 34
  3.6  Anomaly score calculation and anomaly region display ..... 38
Chapter 4  Experiments .............. 40
  4.1  Equipment and development environment ... 40
  4.2  Training of the anomaly detection networks ... 40
  4.3  Evaluation criteria .......... 42
  4.4  Experimental results ......... 44
Chapter 5  Conclusions and Future Work ..... 58
References .......................... 59
References
[1] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of Neural Information Processing Systems (NIPS), Harrahs and Harveys, Lake Tahoe, NV, Dec.3-8, 2012, pp.1106-1114.
[2] S. Akçay, A. Atapour-Abarghouei, and T. P. Breckon, “GANomaly: semi-supervised anomaly detection via adversarial training,” arXiv:1805.06725.
[3] S. Akçay, A. Atapour-Abarghouei, and T. P. Breckon, “Skip-GANomaly: skip connected and adversarially trained encoder-decoder anomaly detection,” arXiv:1901.08954.
[4] J. Yang, Y. Shi, and Z. Qi, “DFR: deep feature reconstruction for unsupervised anomaly segmentation,” arXiv:2012.07122.
[5] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661.
[6] M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus, “Deconvolutional networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, Jun.13-18, 2010, pp.2528-2535.
[7] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv:1511.06434.
[8] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, “Dual attention network for scene segmentation,” arXiv:1809.02983v4.
[9] C.-Y. Liou, J.-C. Huang, and W.-C. Yang, “Modeling word perception using the Elman network,” Neurocomputing, vol.71, no.16-18, pp.3150-3157, Oct.2008.
[10] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv:1312.6114.
[11] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, “Autoencoding beyond pixels using a learned similarity metric,” arXiv:1512.09300.
[12] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, “Adversarial autoencoders,” arXiv:1511.05644v2.
[13] L. Bergman, N. Cohen, and Y. Hoshen, “Deep nearest neighbor anomaly detection,” arXiv:2002.10445v1.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: a large-scale hierarchical image database,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Miami, FL, Jun.20-25, 2009, pp.248-255.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Jun.27-30, 2016, pp.770-778.
[16] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Trans. on Information Theory, vol.13, no.1, pp.21-27, 1967.
[17] N. Cohen and Y. Hoshen, “Sub-image anomaly detection with deep pyramid correspondences,” arXiv:2005.02357v3.
[18] T. Defard, A. Setkov, A. Loesch, and R. Audigier, “PaDiM: a patch distribution modeling framework for anomaly detection and localization,” arXiv:2011.08785v1.
[19] J. L. Suárez-Díaz, S. García, and F. Herrera, “A tutorial on distance metric learning: mathematical foundations, algorithms, experimental analysis, prospects and challenges with appendices on mathematical background and detailed algorithms explanation,” arXiv:1812.05944v3.
[20] K. Roth, L. Pemula, J. Zepeda, B. Schölkopf, T. Brox, and P. Gehler, “Towards total recall in industrial anomaly detection,” arXiv:2106.08265.
[21] D. Feldman, “Introduction to coresets: an updated survey,” arXiv:2011.09384v1.
[22] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv:1505.04597v1.
[23] D. Gong, L. Liu, V. Le, B. Saha, M. R. Mansour, S. Venkatesh, and A. van den Hengel, “Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection,” in Proc. IEEE Conf. on International Conference on Computer Vision (ICCV), Seoul, Korea (South), Oct.27-Nov.2, 2019, pp.1705-1714.
[24] P. Bergmann, S. Lowe, M. Fauser, D. Sattlegger, and C. Steger, “Improving unsupervised defect segmentation by applying structural similarity to autoencoders,” arXiv:1807.02011v3.
[25] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Processing, vol.13, no.4, pp.600-612, 2004.
[26] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556.
[27] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger, “MVTec AD: a comprehensive real-world dataset for unsupervised anomaly detection,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, Jun.15-20, 2019, pp.9592-9600.
[28] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, “Unsupervised anomaly detection with generative adversarial networks to guide marker discovery,” arXiv:1703.05921v1.
[29] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” in Proc. of Neural Information Processing Systems (NIPS), Barcelona, Spain, Dec.5-10, 2016, pp.2234-2242.
[30] P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul.21-26, 2017, pp.5967-5976.
[31] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-excitation networks,” arXiv:1709.01507v4.
[32] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q.-V. Le, and H. Adam, “Searching for MobileNetV3,” arXiv:1905.02244.
[33] S. Woo, J. Park, J.-Y. Lee, and I. Kweon, “CBAM: convolutional block attention module,” arXiv:1807.06521v2.
[34] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv:1505.00853v2.
[35] S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167v3.
[36] A. F. M. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv:1803.08375v2.
[37] D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980v9.
[38] Z. Zhang, T. He, H. Zhang, Z. Zhang, J. Xie, and M. Li, “Bag of freebies for training object detection neural networks,” arXiv:1902.04103v3.
Advisor: Din-Chang Tseng (曾定章)    Date of approval: 2022-8-4