Thesis 108522051: complete metadata record

DC field	Value	Language
dc.contributor	資訊工程學系	zh_TW
dc.creator	張晏慈	zh_TW
dc.creator	Yen-Tzu Chang	en_US
dc.date.accessioned	2021-07-29T07:39:07Z
dc.date.available	2021-07-29T07:39:07Z
dc.date.issued	2021
dc.identifier.uri	http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=108522051
dc.contributor.department	資訊工程學系	zh_TW
dc.description	國立中央大學	zh_TW
dc.description	National Central University	en_US
dc.description.abstract	許多精密的機械零件及電子元件都需要透過三維資訊做精密的檢測,但非接觸性的視覺量測存在一些不完全的缺陷,若將量測結果應用於瑕疵檢測或模型重建上,會有遺漏或誤判的結果,因此深度資料的修補就成為一個重要議題。 對於印刷電路板上的電子元件之深度影像修補議題,我們將發展一個監督式 (supervised) 的深度學習 (deep learning) 系統。深度學習是模擬人類神經元的構造及運作機制,藉由大量的訓練資料作為卷積神經網路 (convolutional neural network, CNN) 學習的依據,從訓練資料與誤差函數中學習如何調整修正網路中的記憶參數,最後得到一個完整的網路模型,可依照輸入樣本給出期望的推論結果。 本研究目標是以一個網路架構來達成深度影像 (depth image) 修補 (completion) 與轉移 (translation) 兩種不同意義的應用任務。修補技術的修復品質佳,但訓練資料取得較不容易;至少需要瑕疵深度影像及完美深度影像;要得到更好結果,還需要瑕疵彩色影像。轉移技術的修復品質較差,但訓練資料只需要瑕疵彩色影像及完美深度影像。由於深度學習系統需要龐大的訓練資料,收集資料著實不易;若能依據手邊可取得的影像資料來選擇修補或轉移模式的話,能減少收集資料的成本並增加使用的彈性。由於兩類型任務的輸入影像張數及內容不一樣,為求一致,網路輸入端做了一些變動;對於不需要輸入的影像就以全黑影像代替。 我們的網路是從輕量級RefineNet模型修改而來的;修改的主要內容包括:i.使用編碼/解碼的網路架構,ii.編碼使用有效率網路 (EfficientNet) 作為骨幹,解碼使用輕量級RefineNet的模組並加入前置激活殘差模組 (pre-activation residual block, PRB),iii.在解碼上採樣中使用反卷積 (deconvolution) 取代內插法,iv.調整網路編碼與解碼之間的複雜度,並在解碼中加入適應性殘差位置注意力 (residual spatial attention) 機制,更進一步提升網路的修補能力。 在實驗中,我們使用印刷電路板上的電子元件影像作訓練及測試,包含瑕疵深度影像、完美深度影像、及瑕疵彩色影像。修改後的卷積神經網路架構相較於輕量級RefineNet,能有效提高原本輕量級RefineNet對於缺失區塊修復的能力,深度影像的平均絕對誤差減少了35%。針對深度修補及轉移的運作模式,修補模式平均絕對誤差為1.39灰階值,而轉移模式平均絕對誤差為3.04灰階值。	zh_TW
dc.description.abstract	Many high-precision mechanical parts and electronic components must be inspected precisely based on 3D information. However, non-contact visual measurement suffers from incomplete 3D data; if the incomplete data are applied to defect detection or 3D model reconstruction, defects may be missed or models falsely reconstructed. Depth-image completion has therefore become an important issue. For the depth images of electronic components on printed circuit boards, we develop a supervised deep-learning system. Deep learning simulates the structure and operating mechanism of human neurons: trained on a large amount of data, a convolutional neural network (CNN) learns from the training data and the error function how to adjust its internal parameters, and the resulting model gives the expected inference result for a given input. The goal of this study is to use a single CNN to accomplish two distinct tasks: depth-image completion and depth-image translation. Completion yields better repair quality but requires more kinds of training data: flawed depth images and the corresponding perfect depth images are necessary, and flawed color images are needed to obtain still better results. Translation yields lower repair quality, but its training requires only flawed color images and perfect depth images. A deep-learning system in general requires a huge amount of training data, and collecting such data is labor-intensive; if the completion or translation mode can be selected according to the data available at hand, the collection cost is reduced and the system becomes more flexible to use. Because the two tasks differ in the number and content of their input images, the network input is unified by feeding an all-black image in place of any input a task does not use. The proposed CNN is modified from the lightweight RefineNet.
The main modifications include: (1) adopting an encoder/decoder architecture; (2) using EfficientNet as the encoder backbone, and lightweight RefineNet modules with pre-activation residual blocks (PRB) as the decoder; (3) replacing interpolation with deconvolution in the decoder's up-sampling; (4) rebalancing the complexity between the encoder and the decoder, and adding an adaptive residual spatial attention module to the decoder to further improve the completion ability. In the experiments, we used images of electronic components on printed circuit boards as training and test data, including flawed depth images, perfect depth images, and flawed color images. The proposed CNN repairs missing regions effectively: compared with the lightweight RefineNet, the mean absolute error of the depth images is reduced by 35%. For the two operating modes, the mean absolute errors (MAE) of the completion and translation modes are 1.39 and 3.04 grayscale values, respectively.	en_US
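The abstract's two operating modes and its evaluation metric can be illustrated with a short sketch. This is our own toy illustration, not code from the thesis: the function names, the flat-list image representation, and the 8-bit grayscale assumption are ours; only the mode/input pairing and the MAE metric come from the abstract.

```python
# Illustrative sketch of the input convention and metric described in the
# abstract. Images are modeled as flat lists of 8-bit grayscale values.
# All names here are hypothetical, chosen for this example only.

def make_black(width, height):
    """All-black placeholder fed in place of an input the mode does not use."""
    return [0] * (width * height)

def build_inputs(mode, flawed_depth, flawed_color, width, height):
    """Completion uses the flawed depth image (plus color if available);
    translation uses only the flawed color image."""
    if mode == "completion":
        color = flawed_color if flawed_color is not None else make_black(width, height)
        return flawed_depth, color
    if mode == "translation":
        return make_black(width, height), flawed_color
    raise ValueError("mode must be 'completion' or 'translation'")

def mean_absolute_error(predicted, perfect):
    """MAE in grayscale values, the metric the abstract reports (1.39 / 3.04)."""
    assert len(predicted) == len(perfect)
    return sum(abs(p - q) for p, q in zip(predicted, perfect)) / len(predicted)
```

For example, `mean_absolute_error([10, 12], [11, 14])` gives 1.5 grayscale values; an MAE of 1.39 therefore means the repaired depth values deviate by well under two gray levels on average.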
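Modification (3) replaces interpolation with deconvolution (transposed convolution) in up-sampling. A minimal 1-D sketch, again our own toy example rather than the thesis network, shows the mechanism: each input value scatters a scaled copy of a kernel into a longer output, so the up-sampling rule is carried by learnable weights instead of a fixed interpolation formula.

```python
def transposed_conv1d(signal, kernel, stride=2):
    """1-D transposed convolution (deconvolution): every input sample
    scatters kernel * sample into the output; overlapping contributions
    are summed. Output length follows (n - 1) * stride + len(kernel)."""
    out = [0.0] * ((len(signal) - 1) * stride + len(kernel))
    for i, x in enumerate(signal):
        for j, w in enumerate(kernel):
            out[i * stride + j] += x * w
    return out
```

With the fixed triangular kernel `[0.5, 1.0, 0.5]` and stride 2, `transposed_conv1d([2, 4], [0.5, 1.0, 0.5])` returns `[1.0, 2.0, 3.0, 4.0, 2.0]`, i.e. exactly linear interpolation between the samples; letting the network learn the kernel instead is what makes deconvolution strictly more expressive than a fixed interpolation rule.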
dc.subject	深度學習	zh_TW
dc.subject	深度影像修補	zh_TW
dc.subject	深度影像轉移	zh_TW
dc.subject	deep learning	en_US
dc.subject	depth image completion	en_US
dc.subject	depth image translation	en_US
dc.title	電子元件之深度影像轉移與修補的深度學習系統	zh_TW
dc.language.iso	zh-TW	zh-TW
dc.title	Depth Image Translation and Completion for Electronic Components using Deep Learning System	en_US
dc.type	博碩士論文	zh_TW
dc.type	thesis	en_US
dc.publisher	National Central University	en_US
