

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89895


    Title: X-ray Image Super-resolution for Electronic Components Using a Deep Learning System
    Author: Kuan, Jo-Lan (管若嵐)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: image super-resolution; deep learning
    Date: 2022-08-04
    Upload time: 2022-10-04 12:03:52 (UTC+8)
    Publisher: National Central University
    Abstract: In automated defect detection for printed circuit boards (PCBs), X-ray images of the electronic components are captured and then inspected for defective regions. To make X-ray imaging more efficient, the components are shot at a lower resolution, which shortens imaging time and reduces storage usage. However, using low-resolution images for detection degrades performance and also makes manual visual inspection harder. We therefore develop image super-resolution to address these problems.
    Traditional image super-resolution methods produce blurred restorations that lose image detail and texture, which in turn causes missed detections during automated defect inspection. Deep learning algorithms have gradually replaced these traditional methods: they recover high-resolution images of higher quality that retain more detail. In this study, we combine deep convolutional neural networks (CNNs) with image processing to propose an image super-resolution system whose restored high-resolution images retain more detail and, after Otsu binarization, closely match the Otsu-binarized ground-truth high-resolution images.
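    Since the restored images are evaluated after Otsu binarization, a minimal NumPy sketch of Otsu's method may help as a reference. This is the standard exhaustive search for the threshold maximizing between-class variance, not code from the thesis:

    ```python
    import numpy as np

    def otsu_threshold(img: np.ndarray) -> int:
        """Return the Otsu threshold for an 8-bit grayscale image.

        Searches every threshold t and keeps the one that maximizes the
        between-class variance of the background/foreground split.
        """
        probs = np.bincount(img.ravel(), minlength=256) / img.size
        mu_total = np.dot(np.arange(256), probs)
        best_t, best_var = 0, -1.0
        cum_w = 0.0   # cumulative probability mass of the background class
        cum_mu = 0.0  # cumulative intensity mass of the background class
        for t in range(256):
            cum_w += probs[t]
            cum_mu += t * probs[t]
            if cum_w < 1e-12 or 1.0 - cum_w < 1e-12:
                continue  # one class is empty; variance undefined
            mu_b = cum_mu / cum_w
            mu_f = (mu_total - cum_mu) / (1.0 - cum_w)
            var_between = cum_w * (1.0 - cum_w) * (mu_b - mu_f) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    def otsu_binarize(img: np.ndarray) -> np.ndarray:
        """Binarize an image at its Otsu threshold (1 = above threshold)."""
        return (img > otsu_threshold(img)).astype(np.uint8)
    ```

    In practice a library routine such as OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)` does the same job.
    
    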
    In this study, we use image processing together with a modified E-RFDN (Enhanced Residual Feature Distillation Network) as the main super-resolution backbone. The modifications are: i. image pre-processing that sharpens the image and then removes noise, so that training is not affected by noise; ii. a revised training-set procedure that replaces random cropping with full cropping of each image, so the network can learn more diverse features; iii. removal of the position attention in the E-RFDN architecture, which speeds up inference and avoids the attention mechanism introducing correlations between samples that degrade reconstruction quality; iv. a loss function whose computation range is modified to match the cropping method, so that the network concentrates on valid pixels.
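    Step i (sharpen first, then denoise) can be sketched as below. The abstract does not specify the operators, so unsharp masking and Gaussian smoothing here are illustrative assumptions, as are all the parameter values:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def preprocess(img: np.ndarray, sharpen_sigma: float = 1.0,
                   amount: float = 1.5, denoise_sigma: float = 0.8) -> np.ndarray:
        """Sharpen, then denoise (order matters: denoising first would
        also blur the fine structure the sharpening step tries to keep).

        Sharpening is a standard unsharp mask: add back a scaled
        high-frequency residual. Denoising is a Gaussian blur. Both are
        assumed stand-ins for the thesis's actual pre-processing.
        """
        img = img.astype(np.float64)
        blurred = gaussian_filter(img, sigma=sharpen_sigma)
        sharpened = img + amount * (img - blurred)   # unsharp masking
        denoised = gaussian_filter(sharpened, sigma=denoise_sigma)
        return np.clip(denoised, 0.0, 255.0)
    ```
    
    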
    In the experiments, we collected 1,493 high-resolution X-ray images of electronic components: 1,193 for the training set and 300 for the validation set. Applying our training-set procedure to the 1,193 training images produced 11,567 pairs of low- and high-resolution images, and image augmentation then raised the total to 92,536 pairs (an eightfold increase).
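    The full-cropping idea from step ii — tiling each image into a complete grid of patches rather than sampling random crops — can be sketched as follows. The patch size, stride, and the average-pooling used here to synthesize the low-resolution counterpart are assumptions for illustration:

    ```python
    import numpy as np

    def full_grid_crops(hr: np.ndarray, patch: int = 96,
                        stride: int = 96, scale: int = 2):
        """Tile a high-resolution image into a full grid of patches and
        pair each with a low-resolution version.

        The LR patch is made by average-pooling with factor `scale`; the
        thesis's actual downsampling operator may differ.
        """
        pairs = []
        h, w = hr.shape
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                hr_patch = hr[y:y + patch, x:x + patch]
                # average-pool scale x scale blocks to form the LR patch
                lr_patch = hr_patch.reshape(patch // scale, scale,
                                            patch // scale, scale).mean(axis=(1, 3))
                pairs.append((lr_patch, hr_patch))
        return pairs
    ```

    Unlike random cropping, every region of every image appears in the training set exactly once per epoch, which is what lets the network see more diverse features.
    
    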
    We report the experimental results in two stages. The first stage compares evaluation metrics. Here the pre-processing only removes noise, and we combine data augmentation, the new training-set procedure, the modified loss-computation range, and the modified network architecture. On the training set, peak signal-to-noise ratio (PSNR) increases by about 5.91 dB, conditional pixel accuracy (CPA) by about 3.581%, and Otsu's threshold error (OTE) decreases by about 0.1526%; on the validation set, PSNR increases by about 5.96 dB, CPA by about 3.417%, and OTE decreases by about 0.1428%. The second stage is a visual comparison in which sharpening is added to the pre-processing, i.e., the input image is sharpened before noise removal. Compared with denoising alone, this restores defect regions in the Otsu-binarized image that denoising had erased, bringing the result closer to the original Otsu-binarized image.
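    Of the metrics above, PSNR has a standard definition; a minimal sketch follows, together with one plausible reading of OTE as the fraction of pixels whose Otsu-binarized values disagree. CPA and the exact OTE formula are defined in the thesis itself, so the `otsu_error_rate` below is an assumption, not the thesis's definition:

    ```python
    import numpy as np

    def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between two same-sized images."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def otsu_error_rate(ref_bin: np.ndarray, test_bin: np.ndarray) -> float:
        """Fraction of pixels on which two binarized images disagree
        (an assumed reading of the OTE metric)."""
        return float(np.mean(ref_bin != test_bin))
    ```
    
    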
    Appears in collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses



    All items in NCUIR are protected by copyright, with all rights reserved.
