    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86577


    Title: Depth Image Translation and Completion for Electronic Components using Deep Learning System (電子元件之深度影像轉移與修補的深度學習系統)
    Author: Chang, Yen-Tzu (張晏慈)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: deep learning; depth image completion; depth image translation
    Date: 2021-07-29
    Upload time: 2021-12-07 12:59:33 (UTC+8)
    Publisher: National Central University
    Abstract: Many high-precision mechanical parts and electronic components must be inspected precisely using 3D information. However, non-contact visual measurement suffers from incomplete 3D data; if such incomplete data are used for defect detection or 3D model reconstruction, defects may be missed or models falsely reconstructed. Depth image completion has therefore become an important issue.
    To address depth image completion for electronic components on printed circuit boards, we develop a supervised deep learning system. Deep learning loosely mimics the structure and operation of biological neurons: a convolutional neural network (CNN) is trained on a large amount of data, learning from the training samples and a loss function how to adjust its internal parameters, until the resulting model produces the expected inference result for a given input.
    The goal of this study is to use a single CNN to perform two conceptually different tasks: depth image completion and depth image translation. Completion yields better repair quality but requires more kinds of training data: flawed depth images and the corresponding perfect depth images are necessary, and flawed color images are needed for even better results. Translation yields lower repair quality, but its training requires only flawed color images and perfect depth images. Since a deep learning system needs a huge amount of training data, which takes considerable manpower to collect, being able to select the completion or translation mode according to whichever data are available at hand reduces the collection cost and increases flexibility. Because the number and content of the input images differ between the two tasks, the network's input is unified: a full-black image is fed in place of any input that a given mode does not use.
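    The abstract does not give the exact input interface; the following is a minimal PyTorch sketch of one way such substitution could work. Here build_input, the 4-channel RGB-plus-depth layout, and the 256x256 size are illustrative assumptions rather than the thesis's actual design.

    import torch

    def build_input(color=None, flawed_depth=None, size=(256, 256)):
        # Assemble a fixed 4-channel (RGB + depth) input shared by both modes.
        # Any image a mode does not use is replaced by an all-black (zero) image.
        # The channel layout, image size, and [0, 1] value range are assumptions.
        black_rgb = torch.zeros(3, *size)
        black_depth = torch.zeros(1, *size)
        color = color if color is not None else black_rgb
        flawed_depth = flawed_depth if flawed_depth is not None else black_depth
        return torch.cat([color, flawed_depth], dim=0)       # shape: (4, H, W)

    # Completion mode: the flawed depth image (and optionally color) is available.
    x_completion = build_input(color=torch.rand(3, 256, 256),
                               flawed_depth=torch.rand(1, 256, 256))
    # Translation mode: only the flawed color image is available.
    x_translation = build_input(color=torch.rand(3, 256, 256))

    Both modes then present the network with an identically shaped tensor, so no per-mode change to the architecture is required.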
    The proposed CNN is modified from the lightweight RefineNet. The main modifications are: (1) an encoder/decoder architecture; (2) EfficientNet as the encoder backbone, with lightweight RefineNet modules and pre-activation residual blocks (PRB) in the decoder; (3) deconvolution instead of interpolation for up-sampling in the decoder; and (4) rebalanced complexity between encoder and decoder, plus an adaptive residual spatial attention module in the decoder to further improve the completion ability.
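    As a rough illustration of these modifications, here is a self-contained PyTorch sketch of an encoder/decoder that up-samples with deconvolution, uses a pre-activation residual block, and adds a simplified residual spatial attention gate. The tiny convolutional encoder stands in for the EfficientNet backbone and the decoder omits the lightweight RefineNet modules, so this is a structural sketch under stated assumptions, not the thesis's network.

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        # Simplified residual spatial attention: a per-pixel sigmoid gate,
        # applied residually so attended features are added back to the input.
        def __init__(self, ch):
            super().__init__()
            self.gate = nn.Sequential(nn.Conv2d(ch, 1, kernel_size=7, padding=3),
                                      nn.Sigmoid())

        def forward(self, x):
            return x + x * self.gate(x)

    class PreActResidualBlock(nn.Module):
        # Pre-activation residual block: (BN -> ReLU -> Conv) twice, plus skip.
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1),
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))

        def forward(self, x):
            return x + self.body(x)

    class DepthNet(nn.Module):
        # Toy encoder/decoder: a small conv encoder stands in for EfficientNet;
        # the decoder up-samples with deconvolution (ConvTranspose2d) rather
        # than interpolation and ends with a 1-channel depth prediction.
        def __init__(self, in_ch=4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            self.decoder = nn.Sequential(
                PreActResidualBlock(64),
                SpatialAttention(64),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    depth = DepthNet()(torch.rand(1, 4, 256, 256))   # -> torch.Size([1, 1, 256, 256])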
    In the experiments, we used images of electronic components on printed circuit boards for training and testing, including flawed depth images, perfect depth images, and flawed color images. The proposed CNN repairs missing regions effectively; compared with the lightweight RefineNet, the mean absolute error of the predicted depth images is reduced by 35%. For the two operating modes, the mean absolute errors (MAE) are 1.39 grayscale values in completion mode and 3.04 grayscale values in translation mode.
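    For reference, "mean absolute error in grayscale values" can be computed as sketched below; the assumption that depth maps are stored in [0, 1] and rescaled by 255, and the optional validity mask, are illustrative and may differ from the thesis's evaluation protocol.

    import torch

    def mae_grayscale(pred, target, mask=None):
        # Mean absolute error expressed in 8-bit grayscale units.
        # Assumes depth maps are stored in [0, 1]; multiplying by 255 reports
        # errors on the same scale as the 1.39 / 3.04 figures above.
        err = (pred - target).abs() * 255.0
        if mask is not None:          # restrict to valid (measured) pixels
            err = err[mask]
        return err.mean()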
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses


    All items in NCUIR are protected by copyright, with all rights reserved.

