References
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to
document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[2] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D.
Jackel, "Backpropagation applied to handwritten zip code recognition," Neural
Computation, vol. 1, no. 4, pp. 541-551, 1989.
[3] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image
recognition," arXiv preprint arXiv:1409.1556, 2014.
[4] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-
778, 2016.
[5] J. Amara, B. Bouaziz, and A. Algergawy, "A deep learning-based approach for banana leaf
diseases classification," Datenbanksysteme für Business, Technologie und Web (BTW
2017)-Workshopband, 2017.
[6] I. Goodfellow, Y. Bengio, and A. Courville, "Convolutional Networks," in Deep Learning,
MIT Press, pp. 321-362, 2016.
[7] M. Elhoushi, Z. Chen, F. Shafiq, Y. H. Tian, and J. Y. Li, "DeepShift: Towards
multiplication-less neural networks," in Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 2359-2368, 2021.
[8] R. Maini and H. Aggarwal, "A comprehensive review of image enhancement techniques,"
arXiv preprint arXiv:1003.4053, 2010.
[9] L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: algorithm and
performance evaluation," IEEE transactions on pattern analysis and machine intelligence,
vol. 20, no. 8, pp. 777-789, 1998.
[10] S. Ritter, D. G. Barrett, A. Santoro, and M. M. Botvinick, "Cognitive psychology for deep
neural networks: A shape bias case study," in International Conference on Machine Learning,
pp. 2940-2949, 2017.
[11] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel,
"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves
accuracy and robustness," arXiv preprint arXiv:1811.12231, 2018.
[12] H. Li, X.-j. Wu, and T. S. Durrani, "Infrared and visible image fusion with ResNet and
zero-phase component analysis," Infrared Physics & Technology, vol. 102, p. 103039,
2019.
[13] H. Li, X.-J. Wu, and J. Kittler, "Infrared and visible image fusion using a deep learning
framework," in 2018 24th international conference on pattern recognition (ICPR), pp.
2705-2710, 2018.
[14] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M.
Dehghani, M. Minderer, G. Heigold, and S. Gelly, "An image is worth 16x16 words:
Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[15] I. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, D.
Keysers, J. Uszkoreit, and M. Lucic, "MLP-Mixer: An all-MLP architecture for vision,"
arXiv preprint arXiv:2105.01601, 2021.
[16] W. Samek, T. Wiegand, and K.-R. Müller, "Explainable artificial intelligence:
Understanding, visualizing and interpreting deep learning models," arXiv preprint
arXiv:1708.08296, 2017.
[17] R. C. Fong and A. Vedaldi, "Interpretable explanations of black boxes by meaningful
perturbation," in Proceedings of the IEEE international conference on computer vision, pp.
3429-3437, 2017.
[18] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, "SmoothGrad: removing
noise by adding noise," arXiv preprint arXiv:1706.03825, 2017.
[19] M. Sundararajan, A. Taly, and Q. Yan, "Axiomatic attribution for deep networks," in
International Conference on Machine Learning, pp. 3319-3328, 2017.
[20] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, "On pixel-wise
explanations for non-linear classifier decisions by layer-wise relevance propagation,"
PLoS ONE, vol. 10, no. 7, p. e0130140, 2015.
[21] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for
discriminative localization," in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, pp. 2921-2929, 2016.
[22] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in
European Conference on Computer Vision, pp. 818-833, 2014.
[23] K. Simonyan, A. Vedaldi, and A. Zisserman, "Deep inside convolutional networks:
Visualising image classification models and saliency maps," arXiv preprint
arXiv:1312.6034, 2013.
[24] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM:
Visual explanations from deep networks via gradient-based localization," in Proceedings
of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.
[25] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in 2005
IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05),
vol. 1, pp. 886-893, 2005.
[26] T. Ojala, M. Pietikainen, and D. Harwood, "Performance evaluation of texture measures
with classification based on Kullback discrimination of distributions," in Proceedings of
12th International Conference on Pattern Recognition, vol. 1, pp. 582-585, 1994.
[27] D. J. Jobson, Z.-u. Rahman, and G. A. Woodell, "Properties and performance of a
center/surround retinex," IEEE Transactions on Image Processing, vol. 6, no. 3, pp. 451-
462, 1997.
[28] C. W. Niblack, R. Barber, W. Equitz, M. D. Flickner, E. H. Glasman, D. Petkovic, P. Yanker,
C. Faloutsos, and G. Taubin, "QBIC project: querying images by content, using color,
texture, and shape," in Storage and Retrieval for Image and Video Databases, vol. 1908, pp.
173-187, 1993.
[29] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern
Analysis and Machine Intelligence, no. 6, pp. 679-698, 1986.
[30] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation
invariant texture classification with local binary patterns," IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[31] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-
444, 2015.
[32] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep
convolutional neural networks," Advances in Neural Information Processing Systems, vol.
25, 2012.
[33] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke,
and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.
[34] C.-H. Chen, M.-Y. Lin, and X.-C. Guo, "High-level modeling and synthesis of smart sensor
networks for Industrial Internet of Things," Computers & Electrical Engineering, vol. 61,
pp. 48-66, 2017.
[35] M. Mora, O. Adelakun, S. Galvan-Cruz, and F. Wang, "Impacts of IDEF0-Based Models
on the Usefulness, Learning, and Value Metrics of Scrum and XP Project Management
Guides," Engineering Management Journal, pp. 1-17, 2021.
[36] R. Julius, T. Trenner, A. Fay, J. Neidig, and X. L. Hoang, "A meta-model based
environment for GRAFCET specifications," in 2019 IEEE International Systems
Conference (SysCon), pp. 1-7, 2019.
[37] P. Novotný and T. Suk, "Leaf recognition of woody species in Central Europe," Biosystems
Engineering, vol. 115, no. 4, pp. 444-452, 2013.
[38] J. Gu, P. Yu, X. Lu, and W. Ding, "Leaf species recognition based on VGG16 networks
and transfer learning," in 2021 IEEE 5th Advanced Information Technology, Electronic
and Automation Control Conference (IAEAC), vol. 5, pp. 2189-2193, 2021.