Master's and Doctoral Theses: Detailed Record for Thesis 107552022




Name  Shiuan-Hau Jhang (張軒豪)    Department  Department of Computer Science and Information Engineering, In-service Master Program
Thesis title  Classification of the defects on polyester films using convolutional neural network
(Chinese title: 以卷積神經網路做聚酯膠片的瑕疵分類)
Related theses
★ Video error concealment for large areas and scene changes
★ Force feedback correction and display in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation based on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using multilevel-segmentation spatial relations
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. The author has agreed to make the electronic full text of this thesis available immediately.
  2. Electronic full texts that have reached their release date are licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the content without authorization.

Abstract (Chinese)  Polyester film is a synthetic plastic used in the outer packaging of many products, such as equipment, food, freight, pharmaceutical, and electronic component packaging. Besides these diverse applications, polyester film must also meet quality requirements. Traditionally the film is inspected by human eyes during production, but the production line moves continuously, and manual inspection of a moving line involves many uncertain factors, such as eye fatigue, inexperienced newcomers, and the difficulty of judging defect size on a moving web. In this thesis we propose a defect classification system for polyester films based on convolutional neural networks (CNNs). The system first trains original network models, then improves the best one after evaluation, for example by reducing the model, expanding the model, increasing the number of training epochs, and performing error analysis. SinGAN is then used to compensate for the data imbalance caused by under-represented classes, so that the system classifies the eight defect types of polyester film, namely bubble, hole, crystal, mosquito (insect), carbon (carbide), spray, lack (too thin), and hoard (too thick), with better precision, recall, and accuracy. The system can therefore be used in product inspection in place of manual visual inspection, keeping product quality stable.
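The three measures named above can be computed from the test-set predictions. Below is a minimal sketch, assuming scikit-learn, integer class labels, and macro averaging over the eight classes; the thesis does not specify its tooling, so the library and averaging mode are illustrative assumptions.

```python
# Minimal sketch: precision, recall, and accuracy for the 8 defect classes.
# scikit-learn and the label encoding are assumptions for illustration.
from sklearn.metrics import accuracy_score, precision_score, recall_score

CLASSES = ["bubble", "hole", "crystal", "mosquito",
           "carbon", "spray", "lack", "hoard"]

def report(y_true, y_pred):
    """Print macro-averaged precision/recall and overall accuracy."""
    print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
    print("recall:   ", recall_score(y_true, y_pred, average="macro", zero_division=0))
    print("accuracy: ", accuracy_score(y_true, y_pred))

# Toy usage with class indices 0..7:
report([0, 1, 2, 3, 4], [0, 1, 2, 3, 7])
```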
This thesis has five parts. In the first part, the three original network models VGG16, GoogLeNet, and DenseNet are trained, and the model with the best classification result proceeds to the next step. The second part reduces the model: constrained variants are built on DenseNet to determine in detail whether low-level features, high-level features, or model depth is the key to classification accuracy on polyester films; the result guides the direction of optimization. The third part expands the model, deepening the convolutional neural network to improve the classification results. The fourth part uses SinGAN to generate defect samples, so that training on classes with too few samples does not overfit and accuracy stays high. The fifth part increases the number of training epochs and performs error analysis, improving model stability and checking whether some original defect samples were manually mislabeled, thereby raising the precision, recall, and accuracy of the classification.
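As a concrete illustration of the first part, the sketch below loads the three off-the-shelf backbones and swaps in an eight-class head. PyTorch/torchvision, the DenseNet-121 variant, and the weight tags are assumptions; the thesis does not state which framework or exact DenseNet depth it used.

```python
# Minimal sketch: adapt pretrained backbones to the 8 defect classes.
# Framework choice (PyTorch/torchvision) is an assumption for illustration.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8  # bubble, hole, crystal, mosquito, carbon, spray, lack, hoard

def make_backbone(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    if name == "vgg16":
        net = models.vgg16(weights="IMAGENET1K_V1")
        net.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    elif name == "googlenet":
        net = models.googlenet(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(1024, NUM_CLASSES)
    else:  # DenseNet-121 stands in for the DenseNet variant used in the thesis
        net = models.densenet121(weights="IMAGENET1K_V1")
        net.classifier = nn.Linear(net.classifier.in_features, NUM_CLASSES)
    return net

model = make_backbone("densenet")  # DenseNet performed best in Part 1
```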
In the experiments we collected images of defects produced during polyester film manufacturing. There are eight defect classes: carbon, 581 images; bubble, 428; mosquito, 32; spray, 234; lack, 98; hole, 28; hoard, 430; and crystal, 460, for a total of 2291 images, all at 224×224 resolution. Without SinGAN-generated samples, after data augmentation there are 6414 training images and 2750 test images (9164 in total); VGG16 reaches 88.84% accuracy, GoogLeNet 69.93%, and DenseNet 91.38%. With SinGAN-generated samples, the mosquito class grows to 266 images, and after augmentation there are 7070 training images and 3030 test images (10100 in total); accuracy rises to 96.57%, an improvement of 5.19 percentage points. The improved DenseNet reaches 100% accuracy, 8.62 percentage points above the original DenseNet, processes 275 images per second, and has a model size of 45.5 MB.
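The augmented totals above (2291 source images growing to 9164) come from offline data augmentation; the thesis's reference list includes the Augmentor library [8], so the pipeline might look like the following sketch. The particular operations, probabilities, folder layout, and sample count are illustrative assumptions, since the abstract reports only the resulting totals.

```python
# Minimal sketch: offline augmentation of one defect class with Augmentor [8].
# The folder path and chosen operations are assumptions for illustration.
import Augmentor

p = Augmentor.Pipeline("dataset/mosquito")  # hypothetical per-class folder
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.flip_left_right(probability=0.5)
p.flip_top_bottom(probability=0.5)
p.sample(1000)  # writes 1000 augmented 224x224 images to an output subfolder
```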
Abstract (English)  Polyester film is a kind of synthetic plastic used in the outer packaging of many products; for example, equipment packaging, food packaging, freight packaging, pharmaceutical packaging, and electronic component packaging. In addition to its diversified applications, the quality of polyester film must also be maintained. Traditionally the film is inspected by human eyes during production, but the production process moves forward continuously, and manual inspection of a moving line involves many uncertain factors; for example, eye fatigue, unfamiliar newcomers, and the difficulty of judging defect size while the film is moving. In this research we propose a polyester film defect classification system based on convolutional neural networks. The system first trains the original network models, then improves the best convolutional neural network after evaluation; for example, by reducing the model, expanding the model, increasing the number of training epochs, and performing error analysis. SinGAN is then used to compensate for the data imbalance caused by an insufficient number of samples, so that the system classifies the defects of polyester films (bubble, hole, crystal, mosquito, carbon, spray, lack, and hoard) with better precision, recall, and accuracy. Therefore, this system can be used in product inspection in place of manual visual inspection to maintain product quality and stability.
This thesis is divided into five parts. The first part trains the three original network models VGG16, GoogLeNet, and DenseNet; the model with the best classification result proceeds to the next step. The second part reduces the model: conditional restrictions are applied on top of DenseNet to determine in detail whether low-level features, high-level features, or model depth is the key to classification accuracy on polyester films, and the result indicates the best direction for modification. The third part expands the model, deepening the convolutional neural network to improve the classification results. The fourth part uses SinGAN to generate defect samples, in order to avoid overfitting when the number of defect samples is too small and to maintain high accuracy. The fifth part increases the number of training epochs and performs error analysis, improving the stability of the model and checking whether the original defect samples contain manual labeling errors, so as to improve the precision, recall, and accuracy of the classification.
In the experiment, we collected images of defects produced during polyester film manufacturing. There are eight types of defects: carbon, 581 images; bubble, 428; mosquito, 32; spray, 234; lack, 98; hole, 28; hoard, 430; and crystal, 460; 2291 images in total, each at a resolution of 224×224. Without SinGAN-generated defect samples, after data augmentation there are 6414 training images and 2750 test images, 9164 in total; the accuracy of VGG16 is 88.84%, of GoogLeNet 69.93%, and of DenseNet 91.38%. With SinGAN-generated defect samples, the mosquito class is enlarged to 266 images; after augmentation there are 7070 training images and 3030 test images, 10100 in total, and the accuracy rises to 96.57%, an increase of 5.19 percentage points. The improved DenseNet reaches 100% accuracy, 8.62 percentage points higher than the original DenseNet; the execution speed is 275 images per second, and the model size is 45.5 MB.
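For reference, a throughput figure such as 275 images per second can be measured as in the sketch below; this is a minimal example assuming PyTorch and a dummy input batch, not the thesis's actual timing procedure or hardware.

```python
# Minimal sketch: measure classification throughput in images per second.
# PyTorch, the batch size, and the dummy input are assumptions for illustration.
import time
import torch

@torch.no_grad()
def images_per_second(model, n_batches=50, batch_size=32):
    model.eval()
    x = torch.randn(batch_size, 3, 224, 224)  # dummy 224x224 RGB batch
    start = time.perf_counter()
    for _ in range(n_batches):
        model(x)  # forward pass only; no gradients are tracked
    elapsed = time.perf_counter() - start
    return n_batches * batch_size / elapsed
```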
Keywords (Chinese)  ★ Polyester film
★ Convolutional neural network
★ Defect classification
Keywords (English)
Table of contents  Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Research motivation
1.2  Research objectives
1.3  Selection and improvement of network models
1.4  Features of this thesis
1.5  Thesis organization
Chapter 2  Related Work
2.1  Artificial neural networks
2.1.1  Introduction to artificial neural networks
2.1.2  Back-propagation neural networks
2.1.3  Activation functions
2.1.4  Learning rate
2.2  Object classification with convolutional neural networks
2.3  Object detection with convolutional neural networks
2.4  Lightweight convolutional neural networks
2.5  Generative adversarial networks
2.5.1  Deep convolutional generative adversarial networks
2.5.2  SinGAN
Chapter 3  Defect Classification of Polyester Films
3.1  Dataset
3.2  Data preprocessing
3.3  Original network models
3.3.1  The VGGNet model
3.3.2  The GoogLeNet model
3.3.3  The DenseNet model
3.3.4  Model comparison
3.4  Model reduction
3.5  Model expansion
3.6  Generating defect samples with SinGAN
Chapter 4  Experimental Results
4.1  Experimental equipment and environment
4.2  Comparison and evaluation
4.3  Experimental results and analysis
4.3.1  Comparison of the original network models
4.3.2  Comparison of the reduced models
4.3.3  Comparison of the expanded models
4.3.4  Comparison of models trained with SinGAN-generated samples
4.3.5  Misclassification analysis and models with increased training epochs
4.3.6  Final overall results
Chapter 5  Conclusions and Future Work
References
References  [1] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556.
[2] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, MA, Jun. 7-12, 2015, pp. 1-9.
[3] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul. 22-25, 2017, pp. 4700-4708.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. 25th Int. Conf. on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec. 3-6, 2012, pp. 1097-1105.
[5] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in Proc. European Conf. on Computer Vision (ECCV), Zurich, Switzerland, Sep. 6-12, 2014, pp. 818-833.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Jun. 27-30, 2016, pp. 770-778.
[7] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, Jun. 18-23, 2018, pp. 7132-7141.
[8] M. D. Bloice, C. Stocker, and A. Holzinger, "Augmentor: An image augmentation library for machine learning," arXiv:1708.04680.
[9] Z. Zhang and M. Sabuncu, "Generalized cross entropy loss for training deep neural networks with noisy labels," arXiv:1805.07836.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, Jun. 23-28, 2014, pp. 580-587.
[11] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders, "Selective search for object recognition," Int. Journal of Computer Vision (IJCV), vol. 104, no. 2, pp. 154-171, 2013.
[12] L. Andreone, F. Bellotti, A. De Gloria, and R. Lauletta, "SVM-based pedestrian recognition on near-infrared images," in Proc. 4th IEEE Int. Symp. on Image and Signal Processing and Analysis, Torino, Italy, Sep. 15-17, 2005, pp. 274-278.
[13] R. Girshick, "Fast R-CNN," in Proc. IEEE Int. Conf. on Computer Vision (ICCV), Santiago, Chile, Dec. 11-18, 2015, pp. 1440-1448.
[14] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 2017.
[15] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, 2015.
[16] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proc. IEEE Int. Conf. on Computer Vision (ICCV), Venice, Italy, Oct. 22-29, 2017, pp. 2980-2988.
[17] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in Proc. European Conf. on Computer Vision (ECCV), Amsterdam, The Netherlands, Oct. 11-14, 2016, pp. 21-37.
[18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Jun. 27-30, 2016, pp. 779-788.
[19] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," in Proc. Int. Conf. on Learning Representations (ICLR), San Juan, Puerto Rico, May 2-4, 2016, pp. 1-14.
[20] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Proc. European Conf. on Computer Vision (ECCV), Amsterdam, The Netherlands, Oct. 11-14, 2016, pp. 525-542.
[21] M. Lin, Q. Chen, and S. Yan, "Network in network," in Proc. Int. Conf. on Learning Representations (ICLR), Banff, Canada, Apr. 14-16, 2014.
[22] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices," arXiv:1707.01083.
[23] N. Ma, X. Zhang, H. Zheng, and J. Sun, "ShuffleNet V2: Practical guidelines for efficient CNN architecture design," arXiv:1807.11164.
[24] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv:1704.04861.
[25] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv:1503.02531.
[26] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul. 22-25, 2017, pp. 1800-1807.
[27] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," arXiv:1406.2661.
[28] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv:1511.06434.
[29] T. R. Shaham, T. Dekel, and T. Michaeli, "SinGAN: Learning a generative model from a single natural image," arXiv:1905.01164.
[30] X. Shen, Y. Chen, X. Tao, and J. Jia, "Convolutional neural pyramid for image processing," arXiv:1704.02071.
Advisor  Din-Chang Tseng (曾定章)    Date of approval  2020-07-29
