Master's/Doctoral Thesis 110522117 — Detailed Record




Author: 蔡杰 (TSAI CHIEH)    Department: Computer Science and Information Engineering
Thesis Title: Continual Learning and Contrastive Learning for Defect Inspection on Images of Printed Circuit Boards (持續學習與對比學習應用於印刷電路板影像的瑕疵檢測)
  1. This electronic thesis is authorized for immediate open access.
  2. The released electronic full text is licensed only for personal, non-profit searching, reading, and printing for purposes of academic research.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (English): With the advancement of technology, electronic products have become indispensable in daily life, and the printed circuit board (PCB) is a basic component of every electronic product; the quality of PCBs is therefore crucial to the quality of all electronic products. Small anomalies inevitably occur during the manufacturing of electronic components, so the abnormal parts of a PCB must be identified in order to maintain high-quality products. For the past few decades, PCB anomaly detection has been performed with traditional automated optical inspection (AOI) technology; however, traditional techniques struggle to reduce the defect miss rate and the overkill rate at the same time. Nowadays, with the advancement of deep learning, manufacturers have begun to introduce deep learning into AOI to improve both the detection rate and the precision of defect inspection.
In recent years, many continual learning methods have been applied to classification tasks, aiming to reduce the cost of model training while preserving the classification ability of neural networks. In this study, we therefore adopt a continual learning method: each time new data arrive, the model simply continues training on the new data instead of being retrained on the old and new data combined, so the training set does not keep accumulating over time. Because this reduces the amount of training data, we also want the network to train well from a small amount of data, so we train the model with contrastive learning. We apply data augmentation to the training data to obtain more positive and negative samples and train the model with a contrastive loss, which pulls training samples of the same class together in the feature space and pushes samples of different classes apart, thereby strengthening the model's classification ability.
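The training objective described above can be sketched as follows. This is a minimal NumPy illustration of a supervised contrastive loss in the style of Khosla et al. [32] (which the method of Mai et al. [1] builds on), not the thesis implementation; the feature vectors, labels, and temperature value are invented for the demonstration.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of feature vectors.

    For each anchor, samples with the same label are positives and all
    other samples appear in the softmax denominator; minimizing the loss
    pulls same-class features together and pushes other classes apart.
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / temperature                    # pairwise cosine similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)              # an anchor never pairs with itself
    sim_max = sim.max(axis=1, keepdims=True)       # subtract row max for stability
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & not_self
    # Mean log-probability of the positives, negated and averaged over anchors.
    return float((-(log_prob * positives).sum(axis=1) / positives.sum(axis=1)).mean())

# Toy batch: two augmented views per class, clustered in feature space.
feats = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
good = supcon_loss(feats, np.array([0, 0, 1, 1]))  # labels match the clusters
bad = supcon_loss(feats, np.array([0, 1, 0, 1]))   # labels cross the clusters
```

When the labels agree with the feature clusters the loss is lower than when they cross them, which is exactly the behavior the training relies on.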
In this study, the proposed system architecture is modified from the model of Mai et al. [1]. The main modifications are: i. retaining the projection layer when using the nearest class mean (NCM) classifier in the testing phase, which improves classification performance; ii. designing a fully connected classifier and a convolutional classifier, both of which outperform the NCM classifier in Mai et al.'s model.
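A nearest-class-mean classifier of the kind referred to above can be sketched as follows; this is an illustrative NumPy version operating on feature vectors (the toy data and class labels are invented), not the thesis code. Each class is represented by the mean of its training features, and a query is assigned to the class whose mean is nearest under cosine distance.

```python
import numpy as np

class NCMClassifier:
    """Nearest-class-mean (NCM) classification over feature vectors."""

    def fit(self, features, labels):
        # One prototype per class: the L2-normalized mean training feature.
        self.classes_ = np.unique(labels)
        means = np.stack([features[labels == c].mean(axis=0) for c in self.classes_])
        self.means_ = means / np.linalg.norm(means, axis=1, keepdims=True)
        return self

    def predict(self, features):
        z = features / np.linalg.norm(features, axis=1, keepdims=True)
        # Nearest mean under cosine distance = largest dot product with a prototype.
        return self.classes_[np.argmax(z @ self.means_.T, axis=1)]

# Toy features: class 0 ("normal") near the x-axis, class 1 ("defective") near the y-axis.
train_feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
train_labels = np.array([0, 0, 1, 1])
clf = NCMClassifier().fit(train_feats, train_labels)
preds = clf.predict(np.array([[0.8, 0.2], [0.2, 0.8]]))  # one query per cluster
```

Because the prototypes are just running class means, this kind of classifier is cheap to update incrementally, which is why it is popular in online class-incremental settings.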
In the experiments, we collected a total of 115,692 PCB images, divided into two categories: defective and normal. The training set consists of 93,178 images (38,413 defective and 54,765 normal); the testing set consists of 22,514 images (9,325 defective and 13,189 normal). The experimental results show that the original model of Mai et al. reaches 94.70% accuracy, 89.89% recall, and 97.14% precision on the testing set. With the architecture modified and designed in this study, the best result on the testing set is 98.51% accuracy, 98.06% recall, and 98.35% precision.
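For reference, the three reported metrics follow from the confusion-matrix counts when "defective" is treated as the positive class; the counts in the example below are invented for illustration and are not the thesis numbers.

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy, recall, and precision for a binary defect detector.

    tp: defective images flagged as defective; fn: defective images missed
    (fn / (tp + fn) is the miss rate); fp: normal images wrongly flagged
    (the source of overkill); tn: normal images correctly passed.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)        # fraction of real defects that were caught
    precision = tp / (tp + fp)     # fraction of flagged images that are truly defective
    return accuracy, recall, precision

# Invented counts for a small test batch.
acc, rec, prec = detection_metrics(tp=90, fp=5, tn=100, fn=10)
```

High recall means few missed defects, while high precision means little overkill; the tension between the two is exactly the trade-off the abstract attributes to traditional AOI.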
Keywords ★ Continual learning
★ Contrastive learning
★ Defect inspection
★ Nearest class mean classifier
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Motivation and Objectives
1.2 System Architecture
1.3 Contributions
1.4 Thesis Organization
Chapter 2 Related Work
2.1 Continual Learning
2.2 Contrastive Learning
2.3 Attention Mechanisms
Chapter 3 The Improved Contrastive Continual Learning System
3.1 The ResNet Architecture
3.2 Feature Extractor
3.3 Memory Buffer
3.4 Data Augmentation
3.5 Projection Layer
3.6 Loss Function
3.7 Classifiers
Chapter 4 Experiments
4.1 Equipment and Development Environment
4.2 Training the Image Classification Network
4.3 Evaluation Metrics
4.4 Experimental Results
Chapter 5 Conclusions and Future Work
References
參考文獻 [1] Z. Mai, R. Li, H. Kim, and S. Sanner, “Supervised contrastive replay: revisiting the nearest class mean classifier in online class-incremental continual learning,” arXiv:2103.13885.
[2] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385.
[3] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, Cambridge, MA, 2016.
[4] M. McCloskey and N. J. Cohen, “Catastrophic interference in connectionist networks: the sequential learning problem,” in Psychology of Learning and Motivation, vol. 24, G. H. Bower, Ed., Academic Press, 1989, pp.109-165.
[5] Z. Ke, B. Liu, N. Ma, H. Xu, and L. Shu, “Achieving forgetting prevention and knowledge transfer in continual learning,” arXiv:2112.02706.
[6] M. Mundt, Y. Hong, I. Pliushch, and V. Ramesh, “A wholistic view of continual learning with deep neural networks: forgotten lessons and the bridge to active and open world learning,” arXiv:2009.01797.
[7] M. D. Lange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars, “A continual learning survey: defying forgetting in classification tasks,” IEEE Trans. Pattern Anal. Mach. Intell., vol.44, no.7, pp.3366-3385, 2021.
[8] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, “Continual lifelong learning with neural networks: a review,” arXiv:1802.07569v4.
[9] R. Hadsell, D. Rao, A. A. Rusu, and R. Pascanu, “Embracing change: continual learning in deep neural networks,” Trends in Cognitive Sciences, vol.24, no.12, pp.1028-1040, Dec. 2020.
[10] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. S. Torr, and M. Ranzato, “On tiny episodic memories in continual learning,” arXiv:1902.10486.
[11] M. Riemer, I. Cases, R. Ajemian, M. Liu, I. Rish, Y. Tu, and G. Tesauro, “Learning to learn without forgetting by maximizing transfer and minimizing interference,” arXiv:1810.11910.
[12] D. Lopez-Paz and M. Ranzato, “Gradient episodic memory for continual learning,” arXiv:1706.08840.
[13] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “iCaRL: incremental classifier and representation learning,” arXiv:1611.07725.
[14] D. Rolnick, A. Ahuja, J. Schwarz, T. P. Lillicrap, and G. Wayne, “Experience replay for continual learning,” arXiv:1811.11682.
[15] G. M. van de Ven, H. T. Siegelmann, and A. S. Tolias, “Brain-inspired replay for continual learning with artificial neural networks,” Nature Communications, vol.11, no.1, pp.1-14, Aug. 2020.
[16] T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohammad, N. Nakashole, E. Platanios, A. Ritterk, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling, “Never-ending learning,” Communications of the ACM, vol.61, no.5, pp.103-115, May 2018.
[17] Z. Li and D. Hoiem, “Learning without forgetting,” arXiv:1606.09282.
[18] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, “Overcoming catastrophic forgetting in neural networks,” Proc. Natl. Acad. Sci., vol.114, no.13, pp.3521-3526, Mar. 2017.
[19] X. Liu, M. Masana, L. Herranz, J. Van de Weijer, A. M. Lopez, and A. D. Bagdanov, “Rotate your networks: better weight consolidation and less catastrophic forgetting,” arXiv:1802.02950.
[20] J. Serrà, D. Surís, M. Miron, and A. Karatzoglou, “Overcoming catastrophic forgetting with hard attention to the task,” arXiv:1801.01423.
[21] J. Yoon, E. Yang, J. Lee, and S. J. Hwang, “Lifelong learning with dynamically expandable networks,” arXiv:1708.01547.
[22] A. Mallya and S. Lazebnik, “PackNet: adding multiple tasks to a single network by iterative pruning,” arXiv:1711.05769.
[23] Z. Ke, B. Liu, and X. Huang, “Continual learning of a mixed sequence of similar and dissimilar tasks,” arXiv:2112.10017.
[24] C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, and D. Wierstra, “PathNet: evolution channels gradient descent in super neural networks,” arXiv:1701.08734.
[25] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv:1503.02531.
[26] A. Jaiswal, A. R. Babu, M. Z. Zadeh, D. Banerjee, and F. Makedon, “A survey on contrastive self-supervised learning,” arXiv:2011.00362.
[27] P. H. Le-Khac, G. Healy, and A. F. Smeaton, “Contrastive representation learning: a framework and review,” arXiv:2010.05113v2.
[28] Y. Tian, D. Krishnan, and P. Isola, “Contrastive multiview coding,” arXiv:1906.05849.
[29] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” arXiv:2002.05709.
[30] O. J. Henaff, A. Srinivas, J. D. Fauw, A. Razavi, C. Doersch, S. M. Ali Eslami, and A. Oord, “Data-efficient image recognition with contrastive predictive coding,” arXiv:1905.09272v3.
[31] R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio, “Learning deep representations by mutual information estimation and maximization,” arXiv:1808.06670.
[32] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan, “Supervised contrastive learning,” arXiv:2004.11362.
[33] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: a unified embedding for face recognition and clustering,” arXiv:1503.03832v3.
[34] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), New York, NY, Jun.17-22, 2006, pp.1735-1742.
[35] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-excitation networks,” arXiv:1709.01507v4.
[36] S. Woo, J. Park, J.-Y. Lee, and I. Kweon, “CBAM: convolutional block attention module,” arXiv:1807.06521v2.
[37] L. S. Shapley, “A value for n-person games,” in Contributions to The Theory of Games, vol. 2, H. W. Kuhn and A. W. Tucker, Ed., Princeton University Press, 1953, pp.307-318.
[38] R. Jia, D. Dao, B. Wang, F. A. Hubis, N. M. Gurel, B. Li, C. Zhang, C. J. Spanos, and D. Song, “Efficient task-specific data valuation for nearest neighbor algorithms,” arXiv:1908.08619.
[39] D. Shim, Z. Mai, J. Jeong, S. Sanner, H. Kim, and J. Jang, “Online class-incremental continual learning with adversarial Shapley value,” arXiv:2009.00093.
[40] Z. Zhang and M. R. Sabuncu, “Generalized cross entropy loss for training deep neural networks with noisy labels,” in Proc. of Neural Information Processing Systems (NIPS), Palais des Congrès de Montréal, Montréal, Canada, Dec.2-8, 2018, pp.8778-8788.
Advisor: 曾定章    Approval date: 2023-07-25

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. - Privacy Policy