Thesis 109522155: Detailed Record




Name: 彭啟恩 (mushding)    Department: Department of Computer Science and Information Engineering
Thesis Title: 3D球柵陣列斷層影像瑕疵檢測的深度學習系統
(Deep learning system for defect inspection on computed tomography images of 3D ball grid array)
Related Theses
★ Video error concealment for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactics simulation based on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
Files: Full text viewable in the system (available after 2028-7-31)
Abstract (Chinese): This study aims to use a deep convolutional neural network to detect inconsistent size-variation defects of solder joints in 3D ball grid arrays (BGA). A ball grid array is a surface mount technology (SMT) in which solder joints are arranged in an array under an integrated circuit component, offering advantages such as a smaller package, more pins, and better heat dissipation. Voiding (open soldering) refers to a solder ball that does not adhere perfectly between the printed circuit board (PCB) and the circuit component, causing the circuit to be open or unstable.
Because the solder balls are sandwiched between two printed circuit boards, their shape and size cannot be observed directly. To detect defective solder balls in a ball grid array, an X-ray inspection system therefore takes tomographic scans at different heights of the solder balls and converts the energy absorbed by each material into a gray level; by comparing gray-level differences between images, defective images can be identified. This work targets inconsistent size variation among the solder joints of a ball grid array, a type of feature that traditional image processing methods cannot extract effectively.
In this study we collected many X-ray tomographic images and used them as input to a deep convolutional neural network, letting the model extract features across the tomographic slices and identify defective solder balls. We aim to provide an end-to-end automatic optical inspection (AOI) technique that is both efficient and accurate, reducing the missed-detection rate of defective solder balls in ball grid arrays.
We use image processing together with a modified CBAM-ResNet-50 as the main network backbone: i. contrast enhancement separates the solder balls from the background of the tomographic images so that the network learns boundary information effectively during training; ii. difference images guide the network to further learn the size-variation features of the solder balls across slices; iii. pretraining and attention modules are added to the architecture to strengthen its ability to find features. A sketch of steps i and ii follows.
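As a rough illustration of steps i and ii (not the thesis code; the 8x8 tile grid and the channel layout are assumptions), the following OpenCV sketch applies CLAHE with the clip limit 3.0 reported in the results paragraph and forms difference images between adjacent slices:

    # Hypothetical sketch: contrast enhancement plus difference images for one
    # group of three tomographic slices (8-bit grayscale numpy arrays).
    import cv2
    import numpy as np

    def preprocess_group(slices):
        # CLAHE with clip limit 3.0 mirrors the "CLAHE 3.0" setting in the
        # results; the 8x8 tile grid is an assumption.
        clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
        enhanced = [clahe.apply(img) for img in slices]
        # Absolute differences between adjacent slices expose how the solder
        # ball's cross-section changes with height.
        diffs = [cv2.absdiff(enhanced[i], enhanced[i + 1]) for i in range(2)]
        # Stack slices and difference images as input channels for the network.
        return np.stack(enhanced + diffs, axis=0)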
The training and testing data consist of groups of three X-ray tomographic images, each group scanned at different heights of a solder ball. The dataset has two classes, normal and defective, with 4,525 normal groups and 592 defective groups. The training set contains 3,621 normal and 472 defective groups; the test set contains 904 normal and 120 defective groups. Finally, image augmentation is applied only to the defective samples, increasing their total tenfold to 5,920 groups, as sketched below.
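A minimal sketch of one way to realize the tenfold expansion; apart from the brightness adjustment mentioned in the results, the specific rotations and flips here are assumptions:

    # Hypothetical sketch: expand one defective group (a C x H x W uint8 array,
    # square crops assumed) into 10 samples; the transform mix is an assumption.
    import numpy as np

    def augment_tenfold(group):
        variants = [group]                              # keep the original
        for k in (1, 2, 3):                             # 90/180/270-degree rotations
            variants.append(np.rot90(group, k, axes=(1, 2)).copy())
        variants.append(group[:, :, ::-1].copy())       # horizontal flip
        variants.append(group[:, ::-1, :].copy())       # vertical flip
        for gain in (0.8, 0.9, 1.1, 1.2):               # darker and brighter versions
            scaled = group.astype(np.float32) * gain
            variants.append(np.clip(scaled, 0, 255).astype(np.uint8))
        return variants                                 # 1 + 3 + 2 + 4 = 10 samples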
In the experiments, we analyze the effects of the data processing and of the network architecture improvements separately. The data processing, comprising contrast enhancement, difference images, CLAHE 3.0, data augmentation, and brightness adjustment, raises the recall from 80.33% to 98.59%. Adding ImageNet pretraining and the CBAM attention module to the network raises the recall from 98.59% to 99.66%.
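For clarity, recall here is the fraction of truly defective groups that the system flags as defective; a minimal sketch with hypothetical counts (118 and 2 are made-up numbers, not the thesis results):

    # Recall = TP / (TP + FN): the share of defective groups actually caught;
    # every missed defect (false negative) lowers it.
    def recall(tp, fn):
        return tp / (tp + fn)

    print(recall(tp=118, fn=2))  # hypothetical counts, prints 0.98333...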
Abstract (English): This study aims to use a deep convolutional neural network to detect size-variation defects in ball grid array (BGA) packaging. Ball grid array packaging is a surface mount technology (SMT) in which circular solder joints are distributed in an array under integrated circuit components, offering advantages such as a smaller package, more pins, and better heat dissipation. Voiding refers to a solder ball that does not adhere perfectly between the printed circuit board (PCB) and the circuit components, causing the circuit to be open or unstable.
Since the solder balls are sandwiched between two printed circuit boards, their shape and size cannot be observed directly. To detect defects in the solder balls of a ball grid array, X-ray inspection equipment performs tomographic scans at different heights of the solder balls and converts the energy absorbed by different materials into gray levels. By comparing the gray-level differences between tomographic images, we can judge whether a solder ball is defective. Traditional image processing methods, however, cannot effectively extract these solder-ball variation features across tomographic images.
In this study, we collected many X-ray tomographic images and used them as inputs to a deep convolutional neural network, allowing the model to learn the correlations between tomographic images and find defective solder balls. We expect to provide an end-to-end automatic optical inspection (AOI) technology that is both efficient and accurate, and to reduce the missed-detection rate of defective solder balls in ball grid arrays.
In this study, we use image processing together with a modified CBAM-ResNet-50 architecture as the main network backbone: i. in preprocessing, contrast enhancement separates the solder joints from the background so that the network can effectively learn boundary information during training; ii. difference images guide the network to further learn the size-variation features of the solder joints; iii. pretraining and an attention module are added to the network architecture to enhance its ability to find features. A sketch of the attention module follows.
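As an illustration of the attention module named above, here is a minimal PyTorch sketch of the CBAM block of Woo et al. [42] (channel attention followed by spatial attention); the reduction ratio, kernel size, and where the thesis inserts the block into ResNet-50 are assumptions:

    # Minimal CBAM block (channel attention, then spatial attention), after [42].
    import torch
    import torch.nn as nn

    class CBAM(nn.Module):
        def __init__(self, channels, reduction=16, kernel_size=7):
            super().__init__()
            # Shared MLP for channel attention over global avg- and max-pooled features.
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1, bias=False),
            )
            # 7x7 convolution for spatial attention over channel-wise avg and max maps.
            self.spatial = nn.Conv2d(2, 1, kernel_size,
                                     padding=kernel_size // 2, bias=False)

        def forward(self, x):
            avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
            mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
            x = x * torch.sigmoid(avg + mx)               # channel attention
            s = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))     # spatial attention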
Each sample in the training and testing data is a group of three X-ray tomographic images scanned at different heights. The dataset is divided into two classes, normal and defective, with 4,525 groups of normal images and 592 groups of defective images. The training set has 3,621 normal and 472 defective groups; the test set has 904 normal and 120 defective groups. Finally, we apply data augmentation only to the defective samples, increasing their total tenfold to 5,920 groups.
In the experiments, we analyze the impact of the data processing and of the network architecture improvements separately. The data processing, which includes contrast enhancement, difference images, CLAHE 3.0, data augmentation, and brightness adjustment, improves recall from 80.33% to 98.59%. Adding an ImageNet-pretrained model and the CBAM attention module to the network architecture improves recall from 98.59% to 99.66%; a sketch of the pretraining step follows.
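A hedged sketch of the pretraining step, using torchvision here rather than the timm library cited in [5]; replacing the classifier head for two classes is the standard transfer-learning recipe, not necessarily the thesis' exact code:

    # Hypothetical sketch: ImageNet-pretrained ResNet-50 with a two-class head
    # (normal vs. defective); CBAM insertion into the stages is omitted.
    import torch.nn as nn
    from torchvision.models import resnet50, ResNet50_Weights

    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)  # ImageNet weights
    model.fc = nn.Linear(model.fc.in_features, 2)             # replace classifier head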
Keywords (Chinese) ★ deep learning (深度學習)
★ ball grid array (球柵陣列)
★ defect inspection (瑕疵檢測)
Keywords (English)
Table of Contents
Chinese Abstract ......... ii
Abstract ......... iv
Acknowledgments ......... vi
Contents ......... vii
List of Figures ......... viii
List of Tables ......... x
Chapter 1 Introduction ......... 1
1.1 Motivation and objectives ......... 1
1.2 System architecture ......... 2
1.3 System features ......... 5
1.4 Thesis organization ......... 6
Chapter 2 Related work ......... 7
2.1 Tomographic image processing ......... 7
2.2 Attention mechanisms ......... 15
Chapter 3 The improved defect inspection system ......... 19
3.1 Feeding multiple tomographic images to the network ......... 19
3.2 Image preprocessing ......... 19
3.3 Modifications to the ResNet-50 architecture ......... 29
Chapter 4 Experiments ......... 34
4.1 Equipment and development environment ......... 34
4.2 Training the defect classification network ......... 34
4.3 Evaluation criteria ......... 37
4.4 Experiments and results ......... 38
Chapter 5 Conclusions and future work ......... 48
References ......... 49
References
[1] T. D. Moore, D. Vanderstraeten, and P. M. Forssell, “Three-dimensional X-ray laminography as a tool for detection and characterization of BGA package defects,” IEEE Trans. on Components and Packaging Technologies, vol.25, no.4, pp.224-229, 2002.
[2] M. S. Laghari and Q. A. Memon, “Identification of faulty BGA solder joints in X-ray images,” Int. Journal of Future Computer and Communication, vol.4, no.2, pp.122-125, 2015.
[3] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec.3-8, 2012, pp.1106-1114.
[4] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385v1.
[5] R. Wightman, “PyTorch image models,” https://github.com/rwightman/pytorch-image-models.
[6] R. Wightman, H. Touvron, and H. Jégou, “ResNet strikes back: An improved training procedure in timm,” arXiv:2110.00476v1.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: a large-scale hierarchical image database,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Miami, FL, Jun.20-25, 2009, pp.248-255.
[8] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” arXiv:1602.07261v2.
[9] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q. V. Le, and H. Adam, “Searching for MobileNetV3,” arXiv:1905.02244v5.
[10] M. Tan and Q. V. Le, “EfficientNet: rethinking model scaling for convolutional neural networks,” arXiv:1905.11946v5.
[11] A. Brock, S. De, and S. L. Smith, “Characterizing signal propagation to close the performance gap in unnormalized ResNets,” arXiv:2101.08692v2.
[12] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” arXiv:1501.00092v3.
[13] C. Dong, C. C. Loy, and X. Tang, “Accelerating the super-resolution convolutional neural network,” arXiv:1608.00367v1.
[14] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” arXiv:1609.05158.
[15] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” arXiv:1707.02921.
[16] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep Laplacian pyramid networks for fast and accurate super-resolution,” arXiv:1704.03915.
[17] J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” arXiv:1511.04587.
[18] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” arXiv:1311.2524v5.
[19] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” arXiv:1506.01497v3.
[20] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” arXiv:1612.03144.
[21] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: optimal speed and accuracy of object detection,” arXiv:2004.10934v1.
[22] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” arXiv:1411.4038v2.
[23] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” arXiv:1511.00561v3.
[24] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv:1505.04597v1.
[25] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: redesigning skip connections to exploit multiscale features in image segmentation,” arXiv:1912.05074v2.
[26] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” arXiv:1703.06870.
[27] D. Maturana and S. Scherer, “VoxNet: a 3D convolutional neural network for real-time object recognition,” in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Sep.28-Oct.3, 2015, pp.922-928.
[28] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3D convolutional networks,” arXiv:1412.0767.
[29] H. Zunair, A. Rahman, N. Mohammed, and J. P. Cohen, “Uniformizing techniques to process CT scans with 3D CNNs for tuberculosis prediction,” arXiv:2007.13224.
[30] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv:1505.04597.
[31] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: fully convolutional neural networks for volumetric medical image segmentation,” arXiv:1606.04797.
[32] A. Hatamizadeh, Y. Tang, V. Nath, D. Yang, A. Myronenko, B. Landman, and D. Xu, “UNETR: transformers for 3D medical image segmentation,” arXiv:2103.10504.
[33] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit and N. Houlsby, “An image is worth 16x16 words: transformers for image recognition at scale,” arXiv:2010.11929.
[34] Q. Zhang, M. Zhang, C. Gamanayake, C. Yuen, Z. Geng, and H. Jayasekara, “Deep learning based defect detection for solder joints on industrial X-ray circuit board images,” arXiv:2008.02604.
[35] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: visual explanations from deep networks via gradient-based localization,” arXiv:1610.02391.
[36] A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian, “Grad-CAM++: improved visual explanations for deep convolutional networks,” arXiv:1710.11063.
[37] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-excitation networks,” arXiv:1709.01507v4.
[38] A. F. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv:1803.08375.
[39] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q.-V. Le, and H. Adam, “Searching for MobileNetV3,” arXiv:1905.02244.
[40] J. Li, J. Wang, Q. Tian, W. Gao, and S. Zhang, “Global-local temporal representations for video person re-identification,” arXiv:1908.10049.
[41] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, “Dual attention network for scene segmentation,” arXiv:1809.02983v4.
[42] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “CBAM: convolutional block attention module,” arXiv:1807.06521v2.
Advisor: 曾定章 (Din-Chang Tseng)    Date of Approval: 2023-7-25
