Electronic Theses & Dissertations 108522064 — Detailed Record




Author  Yi-rong Lin (林沂蓉)    Department  Computer Science and Information Engineering
Thesis Title  An Optimized Defect Detection Method for Industrial Products Based on GANomaly
(Original title: 基於GANomaly方法優化之工業產品瑕疵檢測模型)
Related Theses
★ Multi-type headache classification with extreme learning machines based on the iris color space ★ Iris image quality assessment by weighted fusion of multiple scores
★ A deep-learning-based intelligent machine vision system for industry: a case study on text localization and recognition ★ A real-time blood pressure estimation algorithm based on deep learning
★ A deep-learning-based intelligent machine vision system for industry: a case study on solder joint quality inspection ★ A conditional iris image generation framework based on the pix2pix deep learning model
★ Object tracking with kernelized correlation filters for an eye-tracker implementation ★ Verification and calibration of a laser Doppler blood-flow prototype
★ Generating purpose-specific images with generative adversarial networks: the case of iris images ★ A fast iris segmentation algorithm based on Faster R-CNN
★ Classifying diabetic retinopathy symptoms using deep learning, support vector machines, and teaching-learning-based optimization ★ Iris mask estimation using convolutional neural networks
★ Collaborative Drama-based EFL Learning with Mobile Technology Support in Familiar Context ★ A web service for automatic training of deep learning networks
★ A high-accuracy cosmetic contact lens detection algorithm based on deep learning ★ A CNN-based model for distinguishing real and fake faces
Files  [EndNote RIS format]  [BibTeX format]  Full text available in the system after 2024-08-31
Abstract (Chinese)  With the rapid development of deep learning, combining industrial production with deep learning to build smart factories has become an industry trend. For manufacturers, raising the yield rate of the production line is a core issue; beyond improving production techniques, the accuracy of in-process inspection is key to ensuring the quality and efficiency of the line. Although automated optical inspection (AOI) is already widely deployed on production lines, its strict threshold settings, while preventing defective products from reaching the market, cause a large number of good products to be misjudged as defective and thus reduce production-line throughput. Extra manual re-inspection is therefore needed to resolve this problem.
Manual re-inspection is expensive and inefficient, and its accuracy may drop because of eye fatigue. To address this, we propose replacing the manual re-inspection procedure with deep learning. However, deep learning in industrial inspection faces problems such as imbalanced sample proportions and unknown defect types, which make algorithm development difficult. We propose an optimization method based on the semi-supervised deep learning method GANomaly. GANomaly uses a generative adversarial network to solve anomaly detection problems, but its accuracy and missed-detection rate have not yet reached the standard required for deployment on production lines. We therefore pursue directions such as color space transformation, adjusting the loss functions, and modifying the anomaly score formula to improve accuracy; the resulting optimized defect detection method achieves high accuracy and a low missed-detection rate on our dataset.
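As a hedged illustration of the loss-function adjustment mentioned above (a minimal sketch, not the thesis's actual implementation): GANomaly's generator objective is commonly written as a weighted sum of an adversarial feature-matching term, a contextual (reconstruction) term, and an encoder (latent) term, and re-weighting these terms is one way to bias training toward faithfully reconstructing normal samples. The helper name and default weights below are assumptions based on the public GANomaly implementation.

import torch
import torch.nn.functional as F

def ganomaly_generator_loss(x, x_hat, z, z_hat, feat_real, feat_fake,
                            w_adv=1.0, w_con=50.0, w_enc=1.0):
    """Weighted GANomaly-style generator objective (illustrative sketch).
    x, x_hat : input image and its reconstruction G(x)
    z, z_hat : latent code G_E(x) and re-encoded code E(G(x))
    feat_*   : discriminator feature maps for the real and reconstructed images
    w_adv, w_con, w_enc : the tunable loss weights."""
    l_adv = F.mse_loss(feat_fake, feat_real)  # adversarial feature-matching loss
    l_con = F.l1_loss(x_hat, x)               # contextual (pixel-wise) reconstruction loss
    l_enc = F.mse_loss(z_hat, z)              # latent-consistency (encoder) loss
    return w_adv * l_adv + w_con * l_con + w_enc * l_enc

Under this formulation, the weight adjustment described in the abstract amounts to tuning w_adv, w_con, and w_enc for the defect-inspection data.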
Abstract (English)  With the rapid development of deep learning, combining industrial manufacturing with deep learning to develop smart factories has become one of the major industry trends. For the manufacturing industry, improving the yield rate of the production line is the core issue. In addition to improving production techniques, the accuracy of product defect inspection is crucial to ensuring the quality and efficiency of the production line. Automated optical inspection (AOI) is a computer vision technique that combines image processing and automatic control. Compared with the traditional approach in which humans inspect products with optical instruments, AOI lowers labor cost and shortens inspection time. Although AOI has been widely adopted in production lines, in practice many misclassified samples still need to be double-checked by humans. The reason is that traditional AOI typically applies a hard threshold on extracted features as the decision criterion, which is not flexible enough for practical manufacturing situations; the resulting misclassification, in turn, increases the cost of quality inspection.
Manual re-inspection of suspected defective products is expensive and inefficient, and misclassification may recur at this stage because of eye fatigue. To solve this problem, we propose to use deep learning to replace the manual re-inspection procedure. However, in industrial inspection deep learning faces problems such as imbalanced sample proportions and unknown types of defective samples, which complicate algorithm development. In this thesis, we propose an optimization method based on the semi-supervised deep learning method GANomaly. GANomaly uses a generative adversarial network to solve anomaly detection problems, but its accuracy and missed-detection rate have not yet reached the standard required for manufacturing production lines. Therefore, we propose improvements such as color space transformation, adjusting the loss functions, and modifying the anomaly score to improve accuracy. Finally, our optimized method achieves high accuracy and a low missed-detection rate, outperforming baseline methods such as GANomaly and AnoGAN on our dataset.
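As a rough sketch of two of the directions named above, color space transformation and a modified anomaly score (assumptions: OpenCV is used for the CIELAB conversion, and the score is the L1 latent distance used by GANomaly; the thesis's exact formula may differ, and the function names are hypothetical):

import numpy as np
import cv2

def to_cielab(image_bgr: np.ndarray) -> np.ndarray:
    # Convert a BGR image (OpenCV's default channel order) to CIELAB before feeding the network.
    return cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)

def anomaly_score(z: np.ndarray, z_hat: np.ndarray) -> float:
    # GANomaly-style score: L1 distance between the latent code of the input, z = G_E(x),
    # and the latent code of its reconstruction, z_hat = E(G(x)).
    return float(np.abs(z - z_hat).sum())

def normalize_scores(scores: np.ndarray) -> np.ndarray:
    # Scale the raw scores of a test set into [0, 1] so a single decision threshold can be applied.
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)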
Keywords (Chinese) ★ Deep learning
★ Generative adversarial network
★ Autoencoder
★ Defect detection
★ Computer vision
★ Image recognition
Keywords (English) ★ Deep learning
★ Generative Adversarial Network
★ Autoencoders
★ Defect Detection
★ Computer Vision
★ Image recognition
Table of Contents  Chinese Abstract i
Abstract ii
Acknowledgments iii
Table of Contents iv
List of Tables vi
List of Figures vii
Chapter 1  Introduction 1
1-1 Research Background 1
1-2 Objectives of the Thesis 2
1-3 Thesis Organization 3
Chapter 2  Literature Review 4
2-1. Deep Autoencoder 4
2-2. Generative Adversarial Network (GAN) 5
2-3. Adversarial Autoencoders (AAE) 6
2-4. Deep Convolutional Generative Adversarial Networks (DCGAN) 7
2-5. AnoGAN 8
2-6. GANomaly 9
2-7. ColorNet 11
2-8. Color Transform Based Approach for Disease Spot Detection on Plant Leaf 12
2-9. CIELAB [31] 13
Chapter 3  Research Content and Methods 15
3-1. Overview of the Research Methods 15
3-2. GANomaly and the Anomaly Detection Pipeline 15
3-3. Proposed Optimizations 19
3-4. Nut Dataset 22
Chapter 4  Experimental Results and Comparison 25
4-1. Overview 25
4-2. Results of the Original Method 26
4-3. Color Space Transformation 30
4-4. Loss Weight Adjustment Results 32
4-5. Modified Anomaly Score Formula 33
4-6. Summary 35
Chapter 5  Conclusions and Future Work 37
References 38
References  1. Bahrin, M. A. K., Othman, M. F., Azli, N. H. N., & Talib, M. F., “Industry 4.0: A review on industrial automation and robotic.”, Jurnal Teknologi, vol. 78(6-13), 2016.
2. 張頌榮, “TFT-LCD 面板之點線瑕疵自動化檢測系統” (Automated inspection system for point and line defects on TFT-LCD panels), Master's thesis, Institute of Manufacturing Engineering, National Cheng Kung University, 2005.
3. 李柏蒼, “TFT-LCD高階光學檢測設備國產化策略” (Domestic production strategy for high-end TFT-LCD optical inspection equipment), Master's thesis, Executive MBA Program, National Tsing Hua University, 2009.
4. Li, Z., & Yang, Q., “System design for PCB defects detection based on AOI technology.”, In 2011 4th International Congress on Image and Signal Processing, vol. 4, pp. 1988-1991, IEEE, 2011.
5. Dai, W., Mujeeb, A., Erdt, M., & Sourin, A., “Soldering defect detection in automatic optical inspection.”, Advanced Engineering Informatics, vol. 43, 2020.
6. Luo, Q., Fang, X., Liu, L., Yang, C., & Sun, Y., “Automated visual defect detection for flat steel surface: A survey.”, IEEE Transactions on Instrumentation and Measurement, vol. 69(3), pp. 626-644, 2020.
7. Wei, X., Jiang, S., Li, Y., Li, C., Jia, L., & Li, Y., “Defect detection of pantograph slide based on deep learning and image processing technology.”, IEEE Transactions on Intelligent Transportation Systems, vol. 21(3), pp. 947-958, 2019.
8. Cha, Y. J., Choi, W., & Büyüköztürk, O., “Deep learning‐based crack damage detection using convolutional neural networks.”, Computer‐Aided Civil and Infrastructure Engineering, vol. 32(5), pp. 361-378, 2017.
9. Baur, C., Wiestler, B., Albarqouni, S., & Navab, N., “Deep autoencoding models for unsupervised anomaly segmentation in brain MR images.”, In International MICCAI Brainlesion Workshop, pp. 161-169, Springer, 2018.
10. Davletshina, D., Melnychuk, V., Tran, V., Singla, H., Berrendorf, M., Faerman, E., Fromm, M., & Schubert, M., “Unsupervised anomaly detection for x-ray images.”, arXiv preprint arXiv:2001.10883, 2020.
11. Saligrama, V., & Chen, Z., “Video anomaly detection based on local statistical aggregates.”, In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2112-2119, IEEE, 2012.
12. Zhou, J. T., Du, J., Zhu, H., Peng, X., Liu, Y., & Goh, R. S. M., “AnomalyNet: An anomaly detection network for video surveillance.”, IEEE Transactions on Information Forensics and Security, vol. 14(10), pp. 2537-2550, 2019.
13. Valdes, A., & Cheung, S., “Communication pattern anomaly detection in process control systems.”, In 2009 IEEE Conference on Technologies for Homeland Security, pp. 22-29, IEEE, 2009.
14. Ten, C. W., Hong, J., & Liu, C. C., “Anomaly detection for cybersecurity of the substations.”, IEEE Transactions on Smart Grid, vol. 2(4), pp. 865-873, 2011.
15. Gaus, Y. F. A., Bhowmik, N., Akçay, S., Guillén-Garcia, P. M., Barker, J. W., & Breckon, T. P., “Evaluation of a dual convolutional neural network architecture for object-wise anomaly detection in cluttered X-ray security imagery.”, In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, IEEE, 2019.
16. Akçay, S., Atapour-Abarghouei, A., & Breckon, T. P., “Skip-ganomaly: Skip connected and adversarially trained encoder-decoder anomaly detection.”, In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, IEEE, 2019.
17. Chandola, V., Banerjee, A., & Kumar, V., “Anomaly detection: A survey.”, ACM computing surveys (CSUR), vol. 41(3), pp. 1-58, 2009.
18. Sakurada, M., & Yairi, T., “Anomaly detection using autoencoders with nonlinear dimensionality reduction.”, In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, pp. 4-11, 2014
19. Chen, Z., Yeo, C. K., Lee, B. S., & Lau, C. T., “Autoencoder-based network anomaly detection.”, In 2018 Wireless Telecommunications Symposium (WTS), pp. 1-5, IEEE, 2018.
20. Chow, J. K., Su, Z., Wu, J., Tan, P. S., Mao, X., & Wang, Y. H., “Anomaly detection of defects on concrete structures with the convolutional autoencoder.”, Advanced Engineering Informatics, vol. 45, 2020.
21. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.C., & Bengio, Y., “Generative adversarial networks.”, arXiv preprint arXiv:1406.2661, 2014.
22. Schlegl, T., Seeböck, P., Waldstein, S. M., Schmidt-Erfurth, U., & Langs, G., “Unsupervised anomaly detection with generative adversarial networks to guide marker discovery.”, In International conference on information processing in medical imaging, pp. 146-157, Springer, Cham, 2017.
23. Akcay, S., Atapour-Abarghouei, A., & Breckon, T. P., “Ganomaly: Semi-supervised anomaly detection via adversarial training.”, In Asian conference on computer vision, pp. 622-637, Springer, Cham, 2018.
24. Hinton, G. E., & Salakhutdinov, R. R., “Reducing the dimensionality of data with neural networks.”, Science, vol. 313, no. 5786, pp. 504-507, 2006.
25. Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A., “Extracting and composing robust features with denoising autoencoders.”, In Proceedings of the 25th international conference on Machine learning, pp. 1096-1103, 2008.
26. Lee, H., Battle, A., Raina, R., & Ng, A. Y., “Efficient sparse coding algorithms.”, In Advances in neural information processing systems, pp. 801-808, 2007.
27. Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B., “Adversarial autoencoders.”, arXiv preprint arXiv:1511.05644, 2015.
28. Radford, A., Metz, L., & Chintala, S., “Unsupervised representation learning with deep convolutional generative adversarial networks.”, arXiv preprint arXiv:1511.06434, 2015.
29. Gowda, S. N., & Yuan, C., “ColorNet: Investigating the importance of color spaces for image classification.”, In Asian Conference on Computer Vision, pp. 581-596, Springer, Cham, 2018.
30. Chaudhary, P., Chaudhari, A. K., Cheeran, A. N., & Godara, S., “Color transform based approach for disease spot detection on plant leaf.”, International journal of computer science and telecommunications, vol. 3(6), pp. 65-70, 2012.
31. Schanda, J., Colorimetry: understanding the CIE system. John Wiley & Sons, 2007.
32. Hill, B., Roger, T., & Vorhagen, F. W., “Comparative analysis of the quantization of color spaces on the basis of the CIELAB color-difference formula.”, ACM Transactions on Graphics (TOG), vol. 16(2), pp. 109-154, 1997.
Advisor  Yung-Hui Li (栗永徽)    Approval Date  2021-08-24