Graduate Thesis 108827025: Detailed Record




Name: 吳尚軒 (Wu Shang-Xuan)   Department: Graduate Institute of Biomedical Engineering
Thesis Title: 使用YOLO辨識金屬表面瑕疵
(Defect Detection of Metal Surfaces Using YOLO Technique)
Related Theses
★ An unsupervised learning method for human-posture mode recognition based on density functional theory
★ A dynamic exposure method for tongue-print analysis
★ Development of a networked embedded medical data-acquisition system integrating the Modbus and WebSocket protocols
★ Comparing the performance of the U-Net neural network and the data density functional method for magnetic resonance image segmentation
★ Dynamic tongue image detection and segmentation in a standard environment using the YOLO framework
★ Automatic brain-tumor segmentation using deep learning combined with the fast data density functional transform
★ Simulating the suppression of the COVID-19 epidemic using reinforcement learning
★ A machine learning model for visualizing upper-body motion features by fusing image and accelerometer signals
★ Building a micro experimental platform with an artificial magnetic field for cell culture
★ Verification of a novel MEMS microphone in a standard CMOS process, development of a wet-etching fabrication process, and research on mass-production processes
★ Biological effects of static magnetic fields on cancer cells
★ A joint-angle monitoring device applied to daily knee activities
  1. This electronic thesis is authorized for immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): Taiwan built its economy on manufacturing. However far technology has advanced in the 21st century, and however factory production lines have evolved, product yield remains a key problem to solve. Detecting and scrapping defective products costs labor and money, while letting defective products reach vendors or consumers causes even greater losses and, in the worst case, injuries or deaths. Traditional manufacturers hire large numbers of workers to ensure product quality, but because employees cannot stay attentive at all times, defective products may slip through, again causing significant losses. Today's high-tech industries mainly use AOI (Automated Optical Inspection) for defect detection: optical instruments operating at high speed and high precision, combined with machine-vision techniques, carry out the inspection. Beyond AOI, this study applies the YOLO technique to product inspection. Technically, YOLO learns through convolutional neural networks: large numbers of images are passed through convolutional layers, and a fully connected layer outputs the predictions used to identify product defects. By comparison, AOI relies on manually set parameters for recognition, whereas YOLO recognizes defects with a model learned from data. AOI is therefore better suited to short-run production, while YOLO suits long-run products: although YOLO requires a large amount of data for training, a trained YOLO model can keep improving its recognition accuracy through continued learning, something AOI, which depends on manual parameter tuning, cannot match. In short, modern manufacturing routinely handles very large orders, where the deep-learning-based YOLO more readily shows its strengths; AOI is better suited to small orders, requires manual parameter tuning, and can achieve higher recognition accuracy in the short term. The purpose of this study is to exploit these characteristics of YOLO for defect detection on the hardware of medical products.
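As a minimal sketch of the convolution-then-fully-connected pipeline the abstract describes: the toy classifier below is an illustration only, not the actual YOLOv4 network used in the thesis, and the six output classes are an assumption matching the AIdea defect categories listed in the table of contents.

```python
# A toy convolution -> fully connected classifier illustrating the pipeline
# the abstract describes. This is NOT the YOLOv4 network from the thesis;
# the six output classes are an assumption (AIdea defect categories).
import torch
import torch.nn as nn

class TinyDefectNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Convolutional layers extract local surface features (edges, texture).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 112 -> 56
        )
        # A fully connected layer maps the flattened feature map to class scores.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(start_dim=1))

# One 224x224 RGB image in, six defect-class scores out.
scores = TinyDefectNet()(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 6])
```

A real detector such as YOLOv4 replaces the single linear head with multi-scale prediction layers that also regress bounding boxes, but the learn-from-data principle contrasted with AOI above is the same.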
Abstract (English): The manufacturing industry is the economic foundation of Taiwan. Even though technology has advanced impressively in the 21st century and production lines keep improving, the yield rate of products remains the main issue. Detecting defects among products costs significant labor and work time, while defective products that reach manufacturers or consumers cause further losses and, in some situations, may even result in injuries or deaths. In traditional manufacturing, employers hire large amounts of labor to ensure product quality. However, because employees cannot remain equally attentive at all times, defective products may slip through, resulting in significant losses. High-tech industries currently use AOI (Automated Optical Inspection) to detect defects in products; AOI combines high-speed, high-precision optical instruments with machine-vision technology. In this research, we used the YOLO technique to develop a product defect detection method. The YOLO framework uses convolutional neural networks to learn how to identify defective products: convolutional kernels process the input images, and a fully connected layer outputs the predicted classification results. The AOI technique relies on pre-set parameters for identification, whereas YOLO uses a model trained on data. YOLO needs a large amount of data to train its model, but the trained model can then continuously improve its identification ability through sustained learning, which is why we expect YOLO to outperform AOI in the long run. In conclusion, YOLO shows its strengths more easily on massive production lines, while AOI is better suited to smaller production lines. The purpose of this thesis is to develop a defect detection method using the YOLO technique, with the hope of extending the developed model to applications on medical products.
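To make the contrast with parameter-based AOI concrete, the following hedged sketch shows how a trained YOLO model is typically run for inference, here via OpenCV's DNN module. The config/weights file names, sample image, 608x608 input size, and thresholds are illustrative assumptions, not values taken from the thesis.

```python
# A hedged inference sketch for a trained YOLO defect detector using OpenCV's
# DNN module. File names, image, input size, and thresholds are assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov4-defect.cfg", "yolov4-defect.weights")
out_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("metal_surface.jpg")
h, w = img.shape[:2]

# YOLO consumes a square, 0-1 normalized, RGB blob (608x608 is a common size).
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (608, 608), swapRB=True, crop=False)
net.setInput(blob)

boxes, scores, class_ids = [], [], []
for output in net.forward(out_names):
    for det in output:           # det = [cx, cy, bw, bh, objectness, classes...]
        cls_scores = det[5:]
        cls = int(np.argmax(cls_scores))
        conf = float(cls_scores[cls])
        if conf > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)
            class_ids.append(cls)

# Non-maximum suppression merges overlapping boxes on the same defect.
for i in np.array(cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)).flatten():
    print(class_ids[i], round(scores[i], 3), boxes[i])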
Keywords (Chinese) ★ 影像辨識 (image recognition)
★ 機械學習 (machine learning)
Keywords (English) ★ Image recognition
★ Machine learning
Table of Contents
Chinese Abstract i
English Abstract ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables viii
1. Introduction 1
1-1 Artificial Intelligence 1
1-2 Image Recognition 2
1-3 Focus of This Thesis and Future Outlook 3
2. The YOLOv4 Architecture 4
2-1 Overview of the YOLOv4 Architecture 4
2-2 Input 6
2-3 Backbone 8
2-4 Neck 10
2-5 Prediction 11
3. Research Content and Methods 14
3-1 Data Sources and Research Methods 15
3-2 NEU surface defect database 15
3-2-1 Patches 15
3-2-2 Scratches 15
3-3 AIdea AOI Defect Classification Database 16
3-3-1 Normal 16
3-3-2 Void 16
3-3-3 Vertical defect 16
3-3-4 Particle 17
3-3-5 Horizontal defect 17
3-3-6 Edge defect 18
4. Results and Analysis 19
5. Conclusions 30
References 31
Advisor: 陳健章 (Chen Jian-Zhang)   Review Date: 2021-08-11
