References
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[2] D. Gunning. “Explainable Artificial Intelligence (XAI).” (2016), [Online]. Available: https://www.darpa.mil/program/explainable-artificial-intelligence (visited on 06/17/2024).
[3] European Parliament and Council of the European Union. “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).” (2016), [Online]. Available: https://data.europa.eu/eli/reg/2016/679/oj (visited on 06/17/2024).
[4] C. J. Hoofnagle, B. van der Sloot, and F. Z. Borgesius, “The European Union General Data Protection Regulation: What it is and what it means,” Information & Communications Technology Law, vol. 28, no. 1, pp. 65–98, 2019.
[5] C.-F. Yang et al., A CNN-based interpretable deep learning model, Master’s thesis, 2023.
[6] D. Purves, G. J. Augustine, D. Fitzpatrick, et al., Neuroscience, 3rd ed. Sinauer Associates Inc., 2004.
[7] M. F. Bear, B. W. Connors, and M. A. Paradiso, Neuroscience: Exploring the Brain, Enhanced 4th ed. Burlington, MA: Jones and Bartlett Learning, 2016.
[8] H. Baier, “Synaptic laminae in the visual system: Molecular mechanisms forming layers
of perception,” Annual Review of Cell and Developmental Biology, vol. 29, pp. 385–416,
2013.
[9] J. Hawkins and S. Blakeslee, On Intelligence. New York, NY: Times Books, 2004.
[10] E. R. Kandel, J. D. Koester, S. H. Mack, and S. A. Siegelbaum, Principles of Neural Science, 6th ed. New York, NY: McGraw Hill, 2021.
[11] W. Swartout, C. Paris, and J. Moore, “Explanations in knowledge systems: Design for
explainable expert systems,” IEEE Expert, vol. 6, no. 3, pp. 58–64, 1991.
[12] A. Saranya and R. Subhashini, “A systematic review of explainable artificial intelligence models and applications: Recent developments and future trends,” Decision Analytics Journal, vol. 7, Art. no. 100230, 2023.
[13] V. Chamola, V. Hassija, A. R. Sulthana, D. Ghosh, D. Dhingra, and B. Sikdar, “A review of trustworthy and explainable artificial intelligence (XAI),” IEEE Access, vol. 11, pp. 78994–79015, 2023.
[14] I. E. Nielsen, D. Dera, G. Rasool, R. P. Ramachandran, and N. C. Bouaynaya, “Robust explainability: A tutorial on gradient-based attribution methods for deep neural networks,”
IEEE Signal Processing Magazine, vol. 39, no. 4, pp. 73–84, Jul. 2022.
[15] L. Longo, M. Brcic, F. Cabitza, et al., “Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions,” Information Fusion, vol. 106, Art. no. 102301, 2024.
[16] L. Rokach, “Decision forest: Twenty years of research,” Information Fusion, vol. 27,
pp. 111–125, 2016.
[17] L. Grinsztajn, E. Oyallon, and G. Varoquaux, “Why do tree-based models still outperform deep learning on typical tabular data?” Advances in Neural Information Processing Systems, vol. 35, pp. 507–520, 2022.
[18] S. Salzberg, “A nearest hyperrectangle learning method,” Machine Learning, vol. 6, pp. 251–276, 1991.
[19] J.-S. Jang, “ANFIS: Adaptive-network-based fuzzy inference system,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665–685, 1993.
[20] J. Hatwell, M. M. Gaber, and R. M. A. Azad, “CHIRPS: Explaining random forest classification,” Artificial Intelligence Review, vol. 53, pp. 5747–5788, 2020.
[21] A. Binder, G. Montavon, S. Lapuschkin, K.-R. Müller, and W. Samek, “Layer-wise relevance propagation for neural networks with local renormalization layers,” in Artificial
Neural Networks and Machine Learning – ICANN 2016, A. E. Villa, P. Masulli, and A. J.
Pons Rivero, Eds., Cham: Springer International Publishing, 2016, pp. 63–71.
[22] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’16, San Francisco, California, USA: Association for Computing Machinery, 2016, pp. 1135–1144.
[23] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,”
in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S.
Bengio, et al., Eds., vol. 30, Curran Associates, Inc., 2017.
[24] S. Ö. Arik and T. Pfister, “TabNet: Attentive interpretable tabular learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 8, pp. 6679–6687, May 2021.
[25] K. Čyras, A. Rago, E. Albini, P. Baroni, and F. Toni, “Argumentative XAI: A survey,” in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Z.-H. Zhou, Ed., International Joint Conferences on Artificial Intelligence Organization, Aug. 2021, pp. 4392–4399.
[26] J.-H. Chu. “Chapter 2: Color systems (色彩體系).” (2009), [Online]. Available: https://www.charts.kh.edu.tw/teaching-web/98color/color2-3.htm (visited on 06/17/2024).
[27] T. Riemersma. “Colour metric.” (2019), [Online]. Available: https://www.compuphase.com/cmetric.htm (visited on 06/17/2024).
[28] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034.
[29] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward
neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Y. W. Teh and M. Titterington, Eds., ser. Proceedings of
Machine Learning Research, vol. 9, Chia Laguna Resort, Sardinia, Italy: PMLR, May
2010, pp. 249–256.
[30] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
[31] A. Krizhevsky, V. Nair, and G. Hinton, “CIFAR-10 (Canadian Institute for Advanced Research).”
[32] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, F.
Pereira, C. Burges, L. Bottou, and K. Weinberger, Eds., vol. 25, Curran Associates, Inc.,
2012.
[33] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,”
in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), Jun. 2016.
[34] C. Szegedy, W. Liu, Y. Jia, et al., “Going deeper with convolutions,” in Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015.
[35] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), Jul. 2017.
[36] C. A. Poynton, “‘Gamma’ and its disguises: The nonlinear mappings of intensity in perception, CRTs, film and video,” SMPTE Journal, vol. 102, pp. 1099–1108, Dec. 1993.