Name: Yu-Heng Cao (曹寓恆)
Department: Department of Electrical Engineering
Thesis Title: FastGDBN: A High-Throughput DNN for Identifying Good Dies in Bad Neighborhoods via Computational Redundancy Elimination
Full text available in the repository system after 2030-01-14.
Abstract:
In modern IC design and manufacturing processes, VLSI testing plays a critical role in maintaining product reliability. Because conventional testing techniques struggle to achieve the desired defect coverage at a reasonable test cost, researchers have applied data-analysis techniques to develop testing methods that further enhance defect coverage. One widely used method, Good-Die-in-Bad-Neighborhood (GDBN) detection, identifies high-risk dies by analyzing the distribution patterns of defective-die clusters, meeting the stringent reliability requirements of fields such as automotive and aerospace electronics.
Recent studies have applied deep neural networks (DNNs) to pursue higher detection accuracy; however, the inherently high computational complexity of DNNs significantly reduces the efficiency of GDBN methods. To address this efficiency degradation, we propose a highly parallelized neural-network architecture, FastGDBN, which reduces inference time complexity to linear in the number of wafers. Its end-to-end design also minimizes CPU-GPU transfer overhead and eliminates redundant computation when detecting multiple dies from the same wafer. Experiments on the WM-811K dataset show that FastGDBN improves the maximum gain by 37.76x and achieves a 5,428x speedup over existing methods.
Keywords (Chinese): ★ Wafer-map reliability analysis
★ Convolutional neural network
★ Good-die-in-bad-neighborhood detection
Keywords (English): ★ Outlier detection
★ GDBN
★ CNN
★ Wafer map analysis
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
Table of Figures
Table of Tables
Chapter 1 Introduction
1.1 VLSI Testing
1.2 Good Die in Bad Neighborhoods
1.3 Our Contributions
Chapter 2 Preliminaries
2.1 Previous GDBN Work Review
2.2 Evaluation Metrics
Chapter 3 Problem Formulation
Chapter 4 Proposed Methodology
4.1 Overview
4.2 Patch Encoder
4.3 Segmenter
4.4 Model Architecture
4.5 Dynamic Weighted Binary Cross-entropy Loss
Chapter 5 Experimental Results
5.1 Experimental Settings and Benchmarks
5.2 Analysis of the Leakage Problem
5.3 Analysis of the Observation Window Size
5.4 Analysis of the Sampling Period
5.5 Comparison Between Variants of FastGDBN and Previous Works
Chapter 6 Conclusions
References
Advisor: Yu-Guang Chen (陳聿廣)
Review Date: 2025-01-15
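The GDBN idea summarized in the abstract, flagging dies that passed wafer sort but sit inside a cluster of failing dies, can be sketched as a simple windowed rule. Everything below (the 3x3 window, the threshold of three failing neighbors, the `gdbn_flags` helper) is a hypothetical illustration of the classic rule-based approach, not the thesis's DNN-based method:

```python
# Minimal sketch of rule-based GDBN screening on a wafer map.
# A wafer map is a 2D grid: 1 = die failed wafer sort, 0 = die passed.

def gdbn_flags(wafer, window=1, threshold=3):
    """Flag passing dies whose neighborhood has >= threshold failing dies.

    window=1 means a 3x3 observation window centered on each die.
    Returns a same-shaped grid with 1 where a passing die is deemed high-risk.
    """
    rows, cols = len(wafer), len(wafer[0])
    flags = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if wafer[r][c] == 1:      # already failing: not a GDBN candidate
                continue
            bad = 0
            for dr in range(-window, window + 1):
                for dc in range(-window, window + 1):
                    if dr == 0 and dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and wafer[rr][cc] == 1:
                        bad += 1
            if bad >= threshold:
                flags[r][c] = 1       # passing die in a bad neighborhood
    return flags

wafer = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(gdbn_flags(wafer))
```

Here only the passing die at (1, 1) is flagged, since it is the only one surrounded by three failing neighbors. DNN-based GDBN methods replace the fixed count-and-threshold rule with a learned function of the window contents.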
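The redundancy-elimination claim can be made concrete with a toy model. The sketch below uses a hypothetical two-layer stack of 3x3 convolutions rather than FastGDBN's actual architecture: per-die inference crops an observation window around every die and recomputes intermediate features that neighboring windows share, while a single full-wafer pass computes each feature once and scores all dies of the wafer together.

```python
# Toy comparison: per-die windowed inference vs. one full-wafer pass.
MULS = {"n": 0}  # global multiply counter to compare the two strategies

def conv3x3(grid, k):
    """Full-map 3x3 convolution; out-of-range taps contribute nothing."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        MULS["n"] += 1
                        out[r][c] += k[dr + 1][dc + 1] * grid[rr][cc]
    return out

def crop5x5(grid, r, c):
    """Zero-padded 5x5 observation window centered on die (r, c)."""
    rows, cols = len(grid), len(grid[0])
    return [[grid[rr][cc] if 0 <= rr < rows and 0 <= cc < cols else 0.0
             for cc in range(c - 2, c + 3)] for rr in range(r - 2, r + 3)]

def per_die(wafer, k):
    """One two-layer forward pass per die on its own cropped window."""
    return [[conv3x3(conv3x3(crop5x5(wafer, r, c), k), k)[2][2]
             for c in range(len(wafer[0]))] for r in range(len(wafer))]

def full_wafer(wafer, k):
    """One two-layer forward pass per wafer; features shared by all dies."""
    return conv3x3(conv3x3(wafer, k), k)

k = [[1 / 9.0] * 3 for _ in range(3)]
wafer = [[float((r + c) % 2) for c in range(8)] for r in range(8)]

MULS["n"] = 0
scores_per_die = per_die(wafer, k)
per_die_muls = MULS["n"]
MULS["n"] = 0
scores_full = full_wafer(wafer, k)
full_muls = MULS["n"]
# Interior dies get identical scores (edge dies can differ slightly because
# the crop zero-pads phantom positions), yet the per-die strategy performs
# many times more multiplies; the gap grows with model depth and wafer size.
print(per_die_muls, full_muls)
```

This is the essence of the linear-in-wafers complexity: a segmentation-style full-wafer pass costs one model invocation per wafer instead of one per die, and on a GPU it also avoids launching and transferring data for thousands of tiny per-die crops.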
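Section 4.5 of the table of contents names a dynamic weighted binary cross-entropy loss. A common way to make such a loss "dynamic" is to recompute inverse-frequency class weights for each batch; the sketch below assumes that scheme, and the thesis's exact weighting may differ. The motivation fits the GDBN setting: high-risk dies are rare, so an unweighted loss is dominated by easy negatives.

```python
import math

def dynamic_weighted_bce(y_true, y_pred, eps=1e-7):
    """Mean BCE with per-batch inverse-frequency class weights."""
    n = len(y_true)
    n_pos = sum(y_true)
    n_neg = n - n_pos
    # Inverse-frequency weights, recomputed for every batch ("dynamic").
    w_pos = n / (2.0 * n_pos) if n_pos else 0.0
    w_neg = n / (2.0 * n_neg) if n_neg else 0.0
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(w_pos * t * math.log(p)
                   + w_neg * (1 - t) * math.log(1 - p))
    return total / n

# An imbalanced batch: one positive (high-risk die) among three negatives.
labels = [1, 0, 0, 0]
preds = [0.9, 0.1, 0.2, 0.1]
print(round(dynamic_weighted_bce(labels, preds), 4))
```

With these weights the lone positive contributes as much to the loss as the three negatives combined, so the model cannot minimize the loss by simply predicting "not high-risk" everywhere.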