References
[1] E. R. Kandel, J. H. Schwartz, T. M. Jessell, S. Siegelbaum, A. J. Hudspeth, S. Mack, et al., Principles of Neural Science. New York: McGraw-Hill, 2000, vol. 4.
[2] S. Herculano-Houzel, “The human brain in numbers: A linearly scaled-up primate brain,” Frontiers in Human Neuroscience, p. 31, 2009.
[3] B. Fischl and A. M. Dale, “Measuring the thickness of the human cerebral cortex from magnetic resonance images,” Proceedings of the National Academy of Sciences, vol. 97, no. 20, pp. 11050–11055, 2000.
[4] J. Hawkins and S. Blakeslee, On intelligence, trans. by 洪蘭. Macmillan, 2004.
[5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[6] T. Kohonen, “The self-organizing map,” Proceedings of the IEEE, vol. 78, no. 9, pp. 1464–1480, 1990.
[7] Y.-Y. Hsu, “基於多層自我組織映射圖之可視覺化深度學習模型 (A visualizable deep learning model based on multi-layer self-organizing maps),” Master’s thesis, National Central University, 2018.
[8] D. Gunning and D. Aha, “DARPA’s explainable artificial intelligence (XAI) program,” AI Magazine, vol. 40, no. 2, pp. 44–58, 2019.
[9] P. Jackson, “Introduction to expert systems,” 1986.
[10] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, et al., “Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion, vol. 58, pp. 82–115, 2020.
[11] I. E. Nielsen, D. Dera, G. Rasool, R. P. Ramachandran, and N. C. Bouaynaya, “Robust explainability: A tutorial on gradient-based attribution methods for deep neural networks,” IEEE Signal Processing Magazine, vol. 39, no. 4, pp. 73–84, 2022.
[12] S. Salzberg, “A nearest hyperrectangle learning method,” Machine Learning, vol. 6, pp. 251–276, 1991.
[13] J.-S. Jang, “ANFIS: Adaptive-network-based fuzzy inference system,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665–685, 1993.
[14] C.-T. Lin and C. S. G. Lee, “Neural-network-based fuzzy logic control and decision system,” IEEE Transactions on Computers, vol. 40, no. 12, pp. 1320–1336, 1991.
[15] P. K. Simpson, “Fuzzy min-max neural networks. I. Classification,” IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 776–786, 1992.
[16] M.-C. Su, “Use of neural networks as medical diagnosis expert systems,” Computers in Biology and Medicine, vol. 24, no. 6, pp. 419–429, 1994.
[17] Z. Yang, A. Zhang, and A. Sudjianto, “Enhancing explainability of neural networks through architecture constraints,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 6, pp. 2610–2621, 2020.
[18] A. Sudjianto, Z. Yang, and A. Zhang, “Single-index model tree,” IEEE Transactions on Knowledge and Data Engineering, 2021.
[19] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[20] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PLoS ONE, vol. 10, no. 7, e0130140, 2015.
[21] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv preprint arXiv:1312.6034, 2013.
[22] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13, Springer, 2014, pp. 818–833.
[23] M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in International conference on machine learning, PMLR, 2017, pp. 3319–3328.
[24] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, “SmoothGrad: Removing noise by adding noise,” arXiv preprint arXiv:1706.03825, 2017.
[25] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
[26] M. Alber, S. Lapuschkin, P. Seegerer, et al., “iNNvestigate neural networks!” Journal of Machine Learning Research, vol. 20, no. 93, pp. 1–8, 2019.
[27] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 2016, pp. 1135–1144.
[28] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1026–1034.
[29] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
[30] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.
[31] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
[32] C. Szegedy, W. Liu, Y. Jia, et al., “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.