NCU Institutional Repository — Item 987654321/86573


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86573


    Title: X-ray image defect segmentation and classification for electronic components using deep learning
    Authors: Wu, Hsiao-Wei
    Contributors: Department of Computer Science and Information Engineering
    Keywords: deep learning; semantic segmentation
    Date: 2021-07-29
    Issue Date: 2021-12-07 12:59:10 (UTC+8)
    Publisher: National Central University
    Abstract: Automated production technology has steadily matured, yet product defects remain unavoidable. Manual visual inspection, which traditionally required substantial manpower, not only costs a great deal of time and labor but is also prone to missed detections and overkill (false rejects) caused by eye fatigue and distraction. To achieve efficient, high-precision inspection, automatic optical inspection (AOI) technology emerged, replacing manual defect inspection with automated machines.
    Traditional inspection methods have proven increasingly inadequate as products grow smaller and more intricate. With its rapid development, deep learning has been applied widely and successfully to detection, recognition, and segmentation, and has gradually replaced those traditional methods. For defect detection in X-ray images, traditional methods have always suffered from high miss rates; a detection convolutional neural network (CNN) can reduce missed detections, but small or large defect regions are still detected incompletely. In this research, we therefore propose a segmentation convolutional neural network to reduce both missed detections and overkill in X-ray images.
    Our research aims to use a semantic segmentation network to locate defect regions of different classes in an image. We adopt UNet++ as the main segmentation backbone and modify it as follows: i. replace the plain convolutional layers at each encoder and decoder level with residual blocks, which fixes the fragmented segmentation of small defect regions; ii. insert channel and position attention modules in series between the deepest encoder and decoder so the network attends to the most informative features; iii. include the loss of every encoding and decoding branch in the error computation (deep supervision), which reduces missed detection of large defect regions.
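The abstract does not spell out the loss formulation of modification iii. As a hedged sketch, assuming each UNet++ decoder branch emits a per-pixel probability map and the per-branch losses are simply averaged (plain deep supervision with a binary cross-entropy term), it might look like:

```python
import numpy as np

def pixel_bce(prob, target, eps=1e-7):
    """Per-pixel binary cross-entropy, averaged over all pixels."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(prob) + (1 - target) * np.log(1 - prob)))

def deep_supervision_loss(branch_probs, target):
    """Average the losses of all decoder branches (e.g. the nested
    decoder outputs of UNet++), so every branch receives gradients."""
    return sum(pixel_bce(p, target) for p in branch_probs) / len(branch_probs)
```

Because every branch contributes to the total error, shallow paths cannot "coast" on the deepest decoder's output, which is consistent with the stated improvement on large defect regions.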
    In the experiments, we collected 979 X-ray defect images of electronic components, with 881 images in the training set and 98 in the validation set, and augmented all data eightfold. Testing on images at 256×256 resolution, the improved semantic segmentation network keeps both mean intersection over union (MIoU) and recall at nearly 99.99% on the training set. On the validation set, where labeling standards were less consistent, MIoU rose from 87.44% to 89.30% and recall from 91.92% to 93.70%.
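The reported metrics, MIoU and recall, can both be derived from a pixel-wise confusion matrix. A minimal numpy sketch (the exact per-class averaging used in the thesis may differ):

```python
import numpy as np

def confusion_matrix(pred, target, num_classes):
    """Pixel-wise confusion matrix: rows are ground truth, columns are prediction."""
    mask = (target >= 0) & (target < num_classes)
    idx = num_classes * target[mask] + pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_recall(pred, target, num_classes):
    cm = confusion_matrix(pred, target, num_classes)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp  # predicted as class c, but another class in truth
    fn = cm.sum(axis=1) - tp  # class c in truth, but predicted otherwise
    iou = tp / np.maximum(tp + fp + fn, 1)
    recall = tp / np.maximum(tp + fn, 1)
    return iou.mean(), recall.mean()
```

Here a false negative for a defect class is a missed detection, and a false positive corresponds to overkill, so recall directly tracks the miss rate the thesis aims to reduce.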
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Theses & Dissertations



