References
Castillo, E., Sánchez-Maroño, N., Alonso-Betanzos, A., & Castillo, C. (2007). Functional network topology learning and sensitivity analysis based on ANOVA decomposition. Neural Computation, 19(1), 231–257.
Çetin, O., Temurtaş, F., & Gülgönül, Ş. (2015). An application of multilayer neural network on hepatitis disease diagnosis using approximations of sigmoid activation function. Dicle Medical Journal/Dicle Tip Dergisi, 42(2).
Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4), 303–314.
Dua, D., & Graff, C. (2017). UCI machine learning repository. Retrieved from http://archive.ics.uci.edu/ml
Eğrioğlu, E., Aladağ, Ç. H., & Günay, S. (2008). A new model selection strategy in artificial neural networks. Applied Mathematics and Computation, 195(2), 591–597.
Engelbrecht, A. P., & Cloete, I. (1996). A sensitivity analysis algorithm for pruning feedforward neural networks. In Proceedings of International Conference on Neural Networks (ICNN'96) (Vol. 2, pp. 1274–1278).
Fan, J., Ma, C., & Zhong, Y. (2021). A selective overview of deep learning. Statistical Science, 36(2), 264.
Fernández-Navarro, F., Carbonero-Ruz, M., Alonso, D. B., & Torres-Jiménez, M. (2016). Global sensitivity estimates for neural network classifiers. IEEE Transactions on Neural Networks and Learning Systems, 28(11), 2592–2604.
Fortuin, V., Garriga-Alonso, A., Wenzel, F., Rätsch, G., Turner, R., van der Wilk, M., & Aitchison, L. (2021). Bayesian neural network priors revisited. arXiv preprint arXiv:2102.06571.
Gevrey, M., Dimopoulos, I., & Lek, S. (2003). Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecological Modelling, 160(3), 249–264.
Guidotti, E. (2020). calculus: High dimensional numerical and symbolic calculus in R. arXiv preprint arXiv:2101.00086.
Kowalski, P. A., & Kusy, M. (2017). Sensitivity analysis for probabilistic neural network structure reduction. IEEE Transactions on Neural Networks and Learning Systems, 29(5), 1919–1932.
Märtens, K., & Yau, C. (2020). Neural decomposition: Functional ANOVA with variational autoencoders. In International Conference on Artificial Intelligence and Statistics (pp. 2917–2927).
Montavon, G., Lapuschkin, S., Binder, A., Samek, W., & Müller, K.-R. (2017). Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65, 211–222.
Pizarroso, J., Portela, J., & Muñoz, A. (2020). NeuralSens: Sensitivity analysis of neural networks. arXiv preprint arXiv:2002.11423.
Santner, T. J., Williams, B. J., & Notz, W. I. (2003). The Design and Analysis of Computer Experiments. Springer Series in Statistics. Springer.
Shafi, I., Ahmad, J., Shah, S. I., & Kashif, F. M. (2006). Impact of varying neurons and hidden layers in neural network architecture for a time frequency application. In 2006 IEEE International Multitopic Conference (pp. 188–193).
Siddique, M. A. B., Khan, M. M. R., Arif, R. B., & Ashrafi, Z. (2018). Study and observation of the variations of accuracies for handwritten digits recognition with various hidden layers and epochs using neural network algorithm. In 2018 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT) (pp. 118–123).
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2021). An Introduction to Statistical Learning: With Applications in R. Springer Texts in Statistics, Vol. 103. Springer.
Xie, N., Ras, G., van Gerven, M., & Doran, D. (2020). Explainable deep learning: A field guide for the uninitiated. Journal of Artificial Intelligence Research, 73, 329–39.