References
[1] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of Neural Information Processing Systems (NIPS), Harrahs and Harveys, Lake Tahoe, NV, Dec. 3-8, 2012, pp. 1106-1114.
[2] S. Akçay, A. Atapour-Abarghouei, and T. P. Breckon, “GANomaly: semi-supervised anomaly detection via adversarial training,” arXiv:1805.06725.
[3] S. Akçay, A. Atapour-Abarghouei, and T. P. Breckon, “Skip-GANomaly: skip connected and adversarially trained encoder-decoder anomaly detection,” arXiv:1901.08954.
[4] J. Yang, Y. Shi, and Z. Qi, “DFR: deep feature reconstruction for unsupervised anomaly segmentation,” arXiv:2012.07122.
[5] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661.
[6] M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus, “Deconvolutional networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, Jun. 13-18, 2010, pp. 2528-2535.
[7] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv:1511.06434.
[8] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, “Dual attention network for scene segmentation,” arXiv:1809.02983v4.
[9] C.-Y. Liou, J.-C. Huang, and W.-C. Yang, “Modeling word perception using the Elman network,” Neurocomputing, vol. 71, nos. 16-18, pp. 3150-3157, Oct. 2008.
[10] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv:1312.6114.
[11] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, “Autoencoding beyond pixels using a learned similarity metric,” arXiv:1512.09300.
[12] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, “Adversarial autoencoders,” arXiv:1511.05644v2.
[13] L. Bergman, N. Cohen, and Y. Hoshen, “Deep nearest neighbor anomaly detection,” arXiv:2002.10445v1.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: a large-scale hierarchical image database,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Miami, FL, Jun. 20-25, 2009, pp. 248-255.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Jun. 27-30, 2016, pp. 770-778.
[16] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Trans. on Information Theory, vol. 13, no. 1, pp. 21-27, 1967.
[17] N. Cohen and Y. Hoshen, “Sub-image anomaly detection with deep pyramid correspondences,” arXiv:2005.02357v3.
[18] T. Defard, A. Setkov, A. Loesch, and R. Audigier, “PaDiM: a patch distribution modeling framework for anomaly detection and localization,” arXiv:2011.08785v1.
[19] J. L. Suárez-Díaz, S. García, and F. Herrera, “A tutorial on distance metric learning: mathematical foundations, algorithms, experimental analysis, prospects and challenges with appendices on mathematical background and detailed algorithms explanation,” arXiv:1812.05944v3.
[20] K. Roth, L. Pemula, J. Zepeda, B. Schölkopf, T. Brox, and P. Gehler, “Towards total recall in industrial anomaly detection,” arXiv:2106.08265.
[21] D. Feldman, “Introduction to coresets: an updated survey,” arXiv:2011.09384v1.
[22] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv:1505.04597v1.
[23] D. Gong, L. Liu, V. Le, B. Saha, M. R. Mansour, S. Venkatesh, and A. van den Hengel, “Memorizing normality to detect anomaly: memory-augmented deep autoencoder for unsupervised anomaly detection,” in Proc. IEEE International Conference on Computer Vision (ICCV), Seoul, Korea (South), Oct. 27-Nov. 2, 2019, pp. 1705-1714.
[24] P. Bergmann, S. Lowe, M. Fauser, D. Sattlegger, and C. Steger, “Improving unsupervised defect segmentation by applying structural similarity to autoencoders,” arXiv:1807.02011v3.
[25] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[26] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556.
[27] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger, “MVTec AD: a comprehensive real-world dataset for unsupervised anomaly detection,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, Jun. 15-20, 2019, pp. 9592-9600.
[28] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, “Unsupervised anomaly detection with generative adversarial networks to guide marker discovery,” arXiv:1703.05921v1.
[29] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” in Proc. of Neural Information Processing Systems (NIPS), Barcelona, Spain, Dec. 5-10, 2016, pp. 2234-2242.
[30] P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul. 21-26, 2017, pp. 5967-5976.
[31] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-excitation networks,” arXiv:1709.01507v4.
[32] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q.-V. Le, and H. Adam, “Searching for MobileNetV3,” arXiv:1905.02244.
[33] S. Woo, J. Park, J.-Y. Lee, and I. Kweon, “CBAM: convolutional block attention module,” arXiv:1807.06521v2.
[34] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv:1505.00853v2.
[35] S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167v3.
[36] A. F. M. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv:1803.08375v2.
[37] D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980v9.
[38] Z. Zhang, T. He, H. Zhang, Z. Zhang, J. Xie, and M. Li, “Bag of freebies for training object detection neural networks,” arXiv:1902.04103v3.