

Please use this permanent URL to cite or link to this document: http://ir.lib.ncu.edu.tw/handle/987654321/89903


Title: 電子元件影像匹配的深度學習系統 (A Deep Learning System for Matching Electronic-Component Images)
Author: Li, Ting-Yi (李亭儀)
Contributor: Department of Computer Science and Information Engineering
Keywords: matching; deep learning
Date: 2022-08-06
Upload time: 2022-10-04 12:04:09 (UTC+8)
Publisher: National Central University
Abstract: As technology advances rapidly, more and more products incorporate electronic components, and automated production has become the norm for manufacturing them at scale. Almost all electronic products use printed circuit boards (PCBs), which carry electronic components of many sizes and shapes. Searching for these tiny components by hand costs a great deal of labor and time, so replacing manual search with machines is an inevitable trend. An automated search pipeline crops the component of interest into a template image and then uses matching techniques to locate the same component in the large image under inspection.
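For reference, the sketch below illustrates the kind of classical template-matching baseline that this template/search formulation alludes to, using OpenCV's normalized cross-correlation; the file names and score threshold are assumptions for illustration and are not taken from the thesis.

```python
import cv2
import numpy as np

# Illustrative file names; the thesis dataset is not publicly specified here.
search_img = cv2.imread("pcb_search.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("component_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation of the template against every position
# in the search image; higher scores mean a closer match.
scores = cv2.matchTemplate(search_img, template, cv2.TM_CCOEFF_NORMED)

# Keep every location whose score clears an (assumed) threshold, so that
# multiple identical components can be reported, not just the best one.
# In practice overlapping hits would still need non-maximum suppression.
threshold = 0.8
ys, xs = np.where(scores >= threshold)
h, w = template.shape
for x, y in zip(xs, ys):
    cv2.rectangle(search_img, (int(x), int(y)), (int(x) + w, int(y) + h), 255, 2)

cv2.imwrite("matches.png", search_img)
```

Correlation-based matching of this sort is representative of the traditional algorithms discussed next: it degrades quickly when a component's appearance varies or the PCB background is cluttered, which is what motivates the learned matching network.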
There has already been considerable research on feature matching for printed circuit boards, but most of it relies on traditional algorithms that cannot detect the various kinds of electronic components effectively and are easily disturbed by component variation and background clutter. In this study we therefore propose a matching network based on a convolutional neural network to remedy these shortcomings. An image of the target electronic component is supplied as the template image, and the network searches the search image for matching components. Because matching only requires detecting whether some region of the search image corresponds to the template, the network does not need to learn the component categories themselves and can therefore be applied broadly.
In this study we adapt SiamCAR, a single-object tracking network, as our matching network. The main modifications are: i. changing single-target tracking into matching of multiple similar objects; ii. improving the loss function. In addition, during training we apply extra measures such as image preprocessing and a learning-rate schedule. A simplified sketch of the Siamese matching pipeline is given below.
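The following is a minimal, hypothetical sketch of a SiamCAR-style matcher in PyTorch, assuming a shared backbone, depth-wise cross-correlation between template and search features, and per-location classification and box-regression heads; the module names, layer sizes, and crop sizes are illustrative assumptions and do not reproduce the thesis implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseMatcher(nn.Module):
    """Hypothetical SiamCAR-style matcher: shared backbone, depth-wise
    cross-correlation, then per-location classification and box regression."""
    def __init__(self, channels=256):
        super().__init__()
        # Tiny stand-in backbone; SiamCAR uses a much deeper CNN (e.g. ResNet-50).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Heads predict, for every search-image location, a match score and a box.
        self.cls_head = nn.Conv2d(channels, 1, 1)   # match / no-match logit
        self.reg_head = nn.Conv2d(channels, 4, 1)   # distances to the box sides

    def forward(self, template, search):
        zf = self.backbone(template)                 # template features
        xf = self.backbone(search)                   # search-image features
        # Depth-wise cross-correlation: slide the template features over the
        # search features, one channel at a time.
        b, c, h, w = xf.shape
        corr = F.conv2d(xf.reshape(1, b * c, h, w),
                        zf.reshape(b * c, 1, *zf.shape[-2:]),
                        groups=b * c)
        corr = corr.reshape(b, c, *corr.shape[-2:])
        return self.cls_head(corr), self.reg_head(corr)

# Usage with assumed crop sizes: a 127x127 template and a 255x255 search image.
model = SiameseMatcher()
cls_map, reg_map = model(torch.randn(1, 3, 127, 127), torch.randn(1, 3, 255, 255))
print(cls_map.shape, reg_map.shape)
```

In a tracking setting the classification map is reduced to the single best location; repurposing it for matching means keeping every location whose score clears a threshold, which corresponds to modification i. above.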
For the experiments we collected 851 images of electronic components: 794 images forming 4,601 image pairs for the training set, and 57 images forming 291 image pairs for the validation set. Testing on images at 1,200×1,200 resolution, the improved network reaches a precision of 98.33% and a recall of 94.60%. Compared with the original SiamCAR network, precision rises from 84.81% to 98.33%, an improvement of 13.52 percentage points, and recall rises from 66.02% to 94.60%, an improvement of 28.58 percentage points.
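The reported gains are simple differences of the percentages above; the snippet below only reproduces that arithmetic (the underlying true/false positive counts are not given in the abstract).

```python
# Precision/recall figures quoted in the abstract (in percent).
baseline = {"precision": 84.81, "recall": 66.02}   # original SiamCAR
improved = {"precision": 98.33, "recall": 94.60}   # modified matching network

for metric in ("precision", "recall"):
    gain = improved[metric] - baseline[metric]
    print(f"{metric}: {baseline[metric]:.2f}% -> {improved[metric]:.2f}% "
          f"(+{gain:.2f} points)")
# precision: 84.81% -> 98.33% (+13.52 points)
# recall:    66.02% -> 94.60% (+28.58 points)
```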
Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

Files in This Item:

File        Description  Size  Format  Views
index.html               0 KB  HTML    36


All items in NCUIR are protected by original copyright.
