Abstract (English) |
A printed circuit board (PCB) is a basic component of virtually every electronic product: a structural part made of insulating material carrying conductive wiring. Its main role is to hold electronic components and to connect them through the circuitry etched on the board, serving as the bridge for communication between circuits. PCBs are widely used in aerospace and military systems, precision instruments, computers, communications equipment, industrial products, and consumer electronics.
The quality of a printed circuit board deeply affects the performance of the electronic product built on it; only high-quality boards sustain excellent products. Every manufacturing process produces a small number of abnormalities, and these defects must be detected so that good boards are delivered to downstream manufacturers, who in turn build excellent electronic products. Defect detection in traditional automated optical inspection (AOI) is easily disturbed by the light source and by the complexity of the board itself, so its accuracy is hard to improve. In recent years deep learning has risen and performed outstandingly across many industries; AOI has naturally kept pace, actively adopting deep learning in order to improve both its detection rate and its screening rate.
In this thesis we propose a capsule-network-based defect-detection system for printed circuit boards, developed in four parts. The first part is the original, pure capsule network: we discuss its design and analyze its key components, such as the dynamic routing algorithm, the squash function, and the primary capsules, and optimize the original architecture along these lines. The second part modifies the convolution module: we reduce the convolution layers of the original capsule network and compare the result against the original architecture. The third part is an expanded model, in which we study how deepening the convolution layers and capsule layers affects performance. The fourth part combines deep convolutional networks with capsules: we examine the suitability of different backbones as feature extractors, such as Inception, DenseNet, ResNet, VGGNet, and MobileNet, combine them with the capsule layers, and propose the final improved version.
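The squash function analyzed above is the capsule network's non-linearity: it preserves a vector's direction while compressing its length into [0, 1), so that the length of a capsule's output can be read as the probability that the entity it represents is present. A minimal NumPy sketch of the standard formulation from Sabour et al. (the `eps` stabilizer is our addition for numerical safety):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing non-linearity: scales vector s by ||s||^2 / (1 + ||s||^2),
    shrinking short vectors toward zero and long vectors toward unit length."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)
```

For example, an input vector of length 5 keeps its direction but comes out with length 25/26 ≈ 0.96, while a near-zero vector is squashed almost to zero.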
In the experiments, at the stage of the original pure capsule network we adjusted the squash function, the dynamic routing algorithm, and the dimension of the primary capsules to verify the actual effect of each modification; compared with the original capsule network, accuracy, precision, and recall improved by 1.86%, 1.87%, and 1.86%, respectively. At the stage of the modified convolution module, we combined the capsule layers with a simple convolution layer; compared with the original capsule network, accuracy, precision, and recall improved by 8.15%, 8.08%, and 7.99%, respectively. At the expanded-model stage we deepened the convolution and capsule layers; compared with the original capsule network, accuracy improved by 10.49%, precision by 9.58%, and recall by 10.58%. At the stage combining deep convolutional networks with capsules, after comparing the performance of the candidate backbones we chose DenseNet as the feature extractor in front of the capsule layers. We changed the number of routing iterations of the final capsule layer from 3 to 7, increased the number of training epochs from 100 to 350, replaced the Adam optimizer with AdamW, and used ReduceLROnPlateau as the learning-rate schedule; the final accuracy, precision, recall, and F-score all reached 99.22%. |
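The final training configuration (AdamW with a ReduceLROnPlateau schedule) can be sketched as follows. This is a sketch under the assumption of a PyTorch implementation, which the abstract does not specify; the tiny linear model and random data are placeholders standing in for the DenseNet-plus-capsule architecture and the PCB dataset:

```python
import torch
from torch import nn, optim

torch.manual_seed(0)
# Placeholder model and data; the thesis model is DenseNet + capsule layers.
model = nn.Linear(10, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss_fn = nn.CrossEntropyLoss()

# AdamW: Adam with decoupled weight decay, as in the final configuration.
optimizer = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
# ReduceLROnPlateau lowers the learning rate when the monitored loss plateaus.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10)

for epoch in range(50):  # the thesis trains for 350 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # feed the monitored loss to the scheduler
```

In a real run the scheduler would monitor validation loss rather than training loss; the hyperparameter values (`factor`, `patience`, initial learning rate) are illustrative assumptions, not values reported in the thesis.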