Master's/Doctoral Thesis 104553004: Detailed Record




Author: 房彥呈 (Yen Cheng Fang)    Graduate Department: Communication Engineering (In-service Master Program)
Thesis Title: 基於Faster R-CNN應用於金屬零件表面之瑕疵檢測優化
(Enhanced Defect Detection on Metal Component Surfaces Utilizing Faster R-CNN)
Related Theses
★ Optimizing Mobile Network Systems for Special-Event Environments Using Handheld Mobile Phone Tools
★ Algorithm Design for Dynamic Trajectory Curves on Wearable Devices
★ Study on the Temperature Dependence of Oscillation Frequency Perturbation with respect to Electrode Surface Design in Quartz Resonators
★ Prediction of Stock Opening Price Movements
★ Resource Allocation with Imperfect Spectrum Sensing in Cognitive Radio Heterogeneous Networks
★ Performance Analysis of MIMO Systems with a Large but Finite Number of Antennas
★ Few-Shot Image Segmentation with Masks Generated by a Meta-Learning Classification Weight Transfer Network
★ Implicit Representation with Attention Mechanism for Image-Based Reconstruction of 3D Human Body Models
★ Object Detection Using Adversarial Graph Neural Networks
★ 3D Face Reconstruction Based on Weakly Supervised Learning of Deformable Models
★ A Low-Latency Singing Voice Conversion Architecture for Edge Computing Devices Using Unsupervised Representation Disentanglement Learning
★ Human Pose Estimation from FMCW Radar Based on Sequence-to-Sequence Models
★ Monocular-Camera Semantic Scene Completion Based on Multi-Level Attention Mechanisms
★ A Recursive Least-Squares Beamforming Rake Receiver for the 3GPP WCDMA-FDD Uplink
★ Design and Performance Comparison of Adaptive Long-Range Rayleigh Fading Channel Prediction Algorithms
★ Hybrid Direction-of-Arrival and Time-Delay Estimation Algorithms for Smart Antennas with Geo-location Applications
Files: Browse the thesis in the system (available after 2026-05-04)
Abstract (Chinese): In the era of Industry 4.0, with its rapid, high-volume production pace, efficient and accurate inspection is required to improve process quality.

Traditionally, defects have been judged by manual visual inspection, which is not only time-consuming but also prone to misjudgment rates that fluctuate with the inspector's mental state; moreover, the misjudgment rate of human visual inspection tends to rise as metal components become more complex. Manufacturers have therefore begun to introduce automated optical inspection (AOI) systems to replace traditional manual work. However, today's image-recognition-based AOI systems, in order to reach a detection rate close to one hundred percent, often produce high misjudgment rates, so most of the candidate images flagged by the machine are false defects.

This study therefore applies convolutional neural network methods to strengthen surface defect detection on metal components, with the goal of reducing the high misjudgment rate caused by automated optical systems and of carrying the breakthroughs that convolutional neural networks have achieved in image classification over to inspection technology. Faster R-CNN is used as the main architecture; the overall metric (mAP) of the original model is 89.720. By using the COCO (ResNet-50) pre-trained model and then applying transfer learning to share the already-trained model and its parameters with a new model, the Faster R-CNN algorithm is optimized; the optimized overall metric (mAP) reaches 94.223, an improvement of about 5%.
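The "about 5%" figure is consistent with reading the gain as a relative improvement over the baseline mAP; using the values reported above:

\[ \frac{94.223 - 89.720}{89.720} \approx 0.050 \]

i.e. roughly a 5.0% relative gain, or about 4.5 mAP points in absolute terms.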
Abstract (English): Amidst the rhythm of rapid and large-scale production in the era of Industry 4.0, effective and precise inspection is pivotal to boosting process quality. Traditionally, defect detection hinged on manual visual inspection, which proved not only time-consuming but also vulnerable to fluctuating misjudgment rates due to the inspector's mental condition. Moreover, as the complexity of metal components escalates, the misjudgment rate of human visual inspection tends to increase, leading manufacturers to adopt automated optical inspection systems in lieu of conventional manual labor. However, present automated optical inspection systems, primarily driven by image recognition, often incur high misjudgment rates in the pursuit of near-perfect detection rates, resulting in a majority of machine-selected candidate images being false defects.

This study applies convolutional neural network techniques to enhance surface defect detection on metal components, aiming to alleviate the high misjudgment rates caused by automated optical systems. It aspires to carry the breakthroughs that convolutional neural networks have brought to image classification over into the domain of detection technology. The Faster R-CNN model serves as the main architecture for this research. The overall mean Average Precision (mAP) of the original model is 89.720. By employing the COCO (ResNet-50) pre-trained model and then using transfer learning to transfer the trained model and its parameters to a new model, the Faster R-CNN algorithm is optimized. Consequently, the overall mAP of the optimized model reaches 94.223, an improvement of approximately 5%.
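The optimization described above (starting from a Faster R-CNN with a COCO-pretrained ResNet-50 backbone and transferring its weights to the defect-detection task) corresponds to a common fine-tuning recipe. The sketch below uses torchvision, a hypothetical number of defect classes, and a placeholder data loader; it illustrates the idea only and is not the author's actual pipeline.

```python
# A minimal sketch, assuming PyTorch/torchvision, of the transfer-learning recipe the
# abstract describes: take a Faster R-CNN (ResNet-50 FPN backbone) pre-trained on COCO
# and fine-tune it on a metal-surface defect dataset. The class count and data loader
# are hypothetical placeholders, not the author's actual configuration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 3  # background + assumed defect classes (e.g. scratch, dent, slide mark)

# Load Faster R-CNN with COCO pre-trained weights (backbone: ResNet-50 FPN).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the COCO box predictor for one sized to the defect classes;
# the pre-trained backbone and RPN weights are kept and fine-tuned.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=0.0005,
)

def train_one_epoch(data_loader):
    """One fine-tuning pass; the loader yields (images, targets) in torchvision detection format."""
    model.train()
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # RPN + ROI-head losses in training mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

After fine-tuning, evaluating the model on a held-out defect set with a COCO-style mAP metric would yield overall mAP figures of the kind quoted in the abstract.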
Keywords (Chinese) ★ Defect Detection
★ Object Detection
★ Transfer Learning
Keywords (English) ★ Faster R-CNN
★ Object Detection
★ Transfer Learning
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.3 Chapter Overview
Chapter 2: Related Work
2.1 Literature Review
2.2 Neural Networks
2.2.1 Artificial Neural Networks
2.2.2 Convolutional Neural Networks
2.2.3 Classic Models
2.3 Object Detection Algorithms
2.3.1 R-CNN
2.3.2 Fast R-CNN
2.3.3 Faster R-CNN
2.3.4 Comparison of Detection Methods Across Models
Chapter 3: Experimental Methods
3.1 Model Training Workflow
3.2 Data Preprocessing
3.2.1 Class Labeling
3.2.2 Defect Annotation
3.3 Data Augmentation
3.4 Data Splitting
3.5 Hyperparameter Settings
3.6 Selecting the Best Model
3.6.1 Transfer Learning
3.7 Model Application
3.7.1 Evaluation Metrics
Chapter 4: Research and Experiments
4.1 Experimental Environment
4.2 Model Hyperparameter Settings
4.3 Dataset Split Ratio Experiment (Model-1)
4.4 Slide-Mark Defect Classification Experiment (Model-2)
4.5 Adjusted Data Allocation Experiment (Model-3)
4.6 Combined Model-2 & Model-3 Experiment (Model-4)
4.7 Optimizing Faster R-CNN via Transfer Learning (Model-5)
4.8 Experimental Results of Models 1 to 5
Chapter 5: Conclusions
5.1 Conclusion
Advisor: 陳永芳 (Yung Fang Chen)    Date of Approval: 2023-07-17