References
Salas-Pilco, S. Z., Xiao, K., & Hu, X. (2022). Artificial intelligence and learning
analytics in teacher education: A systematic review. Education Sciences, 12(8),
569.
Abiodun, O. I., Jantan, A., Omolara, A. E., Dada, K. V., Mohamed, N. A., & Arshad,
H. (2018). State-of-the-art in artificial neural network applications: A survey.
Heliyon, 4(11), e00938.
Agarwal, C., Johnson, N., Pawelczyk, M., Krishna, S., Saxena, E., Zitnik, M., &
Lakkaraju, H. (2022). Rethinking stability for attribution-based explanations.
arXiv preprint arXiv:2203.06877.
Agarwal, C., Krishna, S., Saxena, E., Pawelczyk, M., Johnson, N., Puri, I., Zitnik, M.,
& Lakkaraju, H. (2022). OpenXAI: Towards a transparent evaluation of model
explanations. Advances in neural information processing systems, 35, 15784-
15799.
Albreiki, B., Zaki, N., & Alashwal, H. (2021). A systematic literature review of
students' performance prediction using machine learning techniques. Education
Sciences, 11(9), 552.
Alvarez Melis, D., & Jaakkola, T. (2018). Towards robust interpretability with
self-explaining neural networks. Advances in neural information processing
systems, 31.
Alwarthan, S., Aslam, N., & Khan, I. U. (2022). An explainable model for identifying
at-risk student at higher education. IEEE Access, 10, 107649-107668.
Ancona, M., Ceolini, E., Öztireli, C., & Gross, M. (2017). Towards better
understanding of gradient-based attribution methods for deep neural networks.
arXiv preprint arXiv:1711.06104.
Arias-Duart, A., Parés, F., Garcia-Gasulla, D., & Giménez-Ábalos, V. (2022). Focus!
Rating XAI Methods and Finding Biases. 2022 IEEE International Conference
on Fuzzy Systems (FUZZ-IEEE),
Arras, L., Osman, A., & Samek, W. (2020). Ground truth evaluation of neural network
explanations with clevr-xai. arXiv preprint arXiv:2003.07258.
Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C.,
Houde, S., Liao, Q. V., Luss, R., & Mojsilović, A. (2019). One explanation
does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv
preprint arXiv:1909.03012.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W.
(2015). On pixel-wise explanations for non-linear classifier decisions by
layer-wise relevance propagation. PloS one, 10(7), e0130140.
Baker, R. S. (2019). Challenges for the future of educational data mining: The Baker
learning analytics prizes. Journal of Educational Data Mining, 11(1), 1-17.
Bhatt, U., Weller, A., & Moura, J. M. (2020). Evaluating and aggregating
feature-based model explanations. arXiv preprint arXiv:2005.00631.
Cahour, B., & Forzy, J.-F. (2009). Does projection into use improve trust and
exploration? An example with a cruise control system. Safety science, 47(9),
1260-1270.
Chalasani, P., Chen, J., Chowdhury, A. R., Wu, X., & Jha, S. (2020). Concise
explanations of neural networks using adversarial training. International
Conference on Machine Learning,
Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review.
IEEE Access, 8, 75264-75278.
Chui, K. T., Fung, D. C. L., Lytras, M. D., & Lam, T. M. (2020). Predicting at-risk
university students in a virtual learning environment via a machine learning
algorithm. Computers in Human Behavior, 107, 105584.
Collins, R. P., Litman, J. A., & Spielberger, C. D. (2004). The measurement of
perceptual curiosity. Personality and individual differences, 36(5), 1127-1141.
Das, A., & Rad, P. (2020). Opportunities and challenges in explainable artificial
intelligence (xai): A survey. arXiv preprint arXiv:2006.11371.
Dasgupta, S., Frost, N., & Moshkovitz, M. (2022). Framework for evaluating
faithfulness of local explanations. International Conference on Machine
Learning,
Došilović, F. K., Brčić, M., & Hlupić, N. (2018). Explainable artificial intelligence: A
survey. 2018 41st International convention on information and communication
technology, electronics and microelectronics (MIPRO),
Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., Qian, B., Wen, Z.,
Shah, T., & Morgan, G. (2023). Explainable AI (XAI): Core ideas, techniques,
and solutions. ACM Computing Surveys, 55(9), 1-33.
Eckles, J. E., & Stradley, E. G. (2012). A social network analysis of student retention
using archival data. Social Psychology of Education, 15(2), 165-180.
Ferri, C., Hernández-Orallo, J., & Modroiu, R. (2009). An experimental comparison
of performance measures for classification. Pattern recognition letters, 30(1),
27-38.
Flanagan, B., & Ogata, H. (2017). Integration of learning analytics research and
production systems while protecting privacy. The 25th International
Conference on Computers in Education, Christchurch, New Zealand,
Gaikwad, S. K., Gawali, B. W., & Yannawar, P. (2010). A review on speech
recognition technique. International Journal of Computer Applications, 10(3), 16-24.
Gao, J. (2014). Machine learning applications for data center optimization. Google
White Paper.
Guresen, E., & Kayakutlu, G. (2011). Definition of artificial neural networks with
comparison to other networks. Procedia Computer Science, 3, 426-433.
Hart, S. G. (2006). NASA-task load index (NASA-TLX); 20 years later. Proceedings
of the human factors and ergonomics society annual meeting,
Hedström, A., Weber, L., Krakowczyk, D., Bareeva, D., Motzkus, F., Samek, W.,
Lapuschkin, S., & Höhne, M. M.-C. (2023). Quantus: An Explainable AI
Toolkit for Responsible Evaluation of Neural Network Explanations and
Beyond. Journal of Machine Learning Research, 24(34), 1-11.
Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for
explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
Holzinger, A., Saranti, A., Molnar, C., Biecek, P., & Samek, W. (2022). Explainable
AI methods-a brief overview. xxAI-Beyond Explainable AI: International
Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna,
Austria, Revised and Extended Papers,
Hossin, M., & Sulaiman, M. N. (2015). A review on evaluation metrics for data
classification evaluations. International journal of data mining & knowledge
management process, 5(2), 1.
Hsiao, J. H.-w., Ngai, H. H. T., Qiu, L., Yang, Y., & Cao, C. C. (2021). Roadmap of
designing cognitive metrics for explainable artificial intelligence (XAI). arXiv
preprint arXiv:2108.01737.
Jain, A. K., Mao, J., & Mohiuddin, K. M. (1996). Artificial neural networks: A
tutorial. Computer, 29(3), 31-44.
Jang, Y., Choi, S., Jung, H., & Kim, H. (2022). Practical early prediction of students’
performance using machine learning and eXplainable AI. Education and
Information Technologies, 1-35.
Jayalakshmi, T., & Santhakumaran, A. (2011). Statistical normalization and back
propagation for classification. International Journal of Computer Theory and
Engineering, 3(1), 1793-8201.
Jones, L. (2017). Driverless cars: when and where? Engineering & Technology, 12(2),
36-40.
Kashdan, T. B., Gallagher, M. W., Silvia, P. J., Winterstein, B. P., Breen, W. E., Terhar,
D., & Steger, M. F. (2009). The curiosity and exploration inventory-II:
Development, factor structure, and psychometrics. Journal of research in
personality, 43(6), 987-998.
Katarya, R., Gaba, J., Garg, A., & Verma, V. (2021). A review on machine learning
based student’s academic performance prediction systems. 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS),
Khosravi, H., Shum, S. B., Chen, G., Conati, C., Tsai, Y.-S., Kay, J., Knight, S.,
Martinez-Maldonado, R., Sadiq, S., & Gašević, D. (2022). Explainable
artificial intelligence in education. Computers and Education: Artificial
Intelligence, 3, 100074.
Kohlbrenner, M., Bauer, A., Nakajima, S., Binder, A., Samek, W., & Lapuschkin, S.
(2020). Towards best practice in explaining neural network decisions with
LRP. 2020 International Joint Conference on Neural Networks (IJCNN),
Krishna, S., Gupta, R., Verma, A., Dhamala, J., Pruksachatkun, Y., & Chang, K.-W.
(2022). Measuring fairness of text classifiers via prediction sensitivity. arXiv
preprint arXiv:2203.08670.
Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., & Lakkaraju, H. (2022).
The disagreement problem in explainable machine learning: A practitioner's
perspective. arXiv preprint arXiv:2202.01602.
Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the
concept of interpretability is both important and slippery. Queue, 16(3), 31-57.
Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., & Alsaadi, F. E. (2017). A survey of
deep neural network architectures and their applications. Neurocomputing,
234, 11-26.
Lu, O. H., Huang, A. Y., Huang, J. C., Huang, C. S., & Yang, S. J. (2016). Early-Stage
Engagement: Applying Big Data Analytics on Collaborative Learning
Environment for Measuring Learners' Engagement Rate. 2016 International
Conference on Educational Innovation through Technology (EITT),
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model
predictions. Advances in neural information processing systems, 30.
Marbouti, F., Diefes-Dux, H. A., & Madhavan, K. (2016). Models for early prediction
of at-risk students in a course using standards-based grading. Computers &
Education, 103, 1-15.
Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The role of explainability in
creating trustworthy artificial intelligence for health care: a comprehensive
survey of the terminology, design choices, and evaluation strategies. Journal
of Biomedical Informatics, 113, 103655.
Merlo, J., Chaix, B., Ohlsson, H., Beckman, A., Johnell, K., Hjerpe, P., Råstam, L., &
Larsen, K. (2006). A brief conceptual tutorial of multilevel analysis in social
epidemiology: using measures of clustering in multilevel logistic regression to
investigate contextual phenomena. Journal of Epidemiology & Community
Health, 60(4), 290-297.
Merritt, S. M. (2011). Affective processes in human–automation interactions. Human Factors, 53(4), 356-370.
Montavon, G., Samek, W., & Müller, K.-R. (2018). Methods for interpreting and
understanding deep neural networks. Digital signal processing, 73, 1-15.
Mood, C. (2010). Logistic regression: Why we cannot do what we think we can do,
and what we can do about it. European sociological review, 26(1), 67-82.
Nguyen, A.-p., & Martínez, M. R. (2020). On quantitative aspects of model
interpretability. arXiv preprint arXiv:2007.07584.
O'Shea, T. J., & West, N. (2016). Radio machine learning dataset generation with
GNU Radio. Proceedings of the GNU Radio Conference,
Ogata, H., Oi, M., Mohri, K., Okubo, F., Shimada, A., Yamada, M., Wang, J., &
Hirokawa, S. (2017). Learning analytics for e-book-based educational big data
in higher education. Smart sensors at the IoT frontier, 327-350.
Ogata, H., Yin, C., Oi, M., Okubo, F., Shimada, A., Kojima, K., & Yamada, M.
(2015). E-Book-based learning analytics in university education. International
conference on computer in education (ICCE 2015),
Oqaidi, K., Aouhassi, S., & Mansouri, K. (2022). Towards a Students’ Dropout
Prediction Model in Higher Education Institutions Using Machine Learning
Algorithms. International Journal of Emerging Technologies in Learning
(Online), 17(18), 103.
Osmanbegovic, E., & Suljic, M. (2012). Data mining approach for predicting student
performance. Economic Review: Journal of Economics and Business, 10(1), 3-
12.
Oxford, R. (1990). Language learning strategies: What every teacher should know.
Heinle & Heinle Publishers.
Peng, C.-Y. J., Lee, K. L., & Ingersoll, G. M. (2002). An introduction to logistic
regression analysis and reporting. The journal of educational research, 96(1),
3-14.
Pintrich, P. R. (1991). A manual for the use of the Motivated Strategies for Learning
Questionnaire (MSLQ).
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?"
Explaining the predictions of any classifier. Proceedings of the 22nd ACM
SIGKDD international conference on knowledge discovery and data mining,
Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision
model-agnostic explanations. Proceedings of the AAAI conference on artificial
intelligence,
Rieger, L., & Hansen, L. K. (2020). Irof: a low resource evaluation metric for
explanation methods. arXiv preprint arXiv:2003.08747.
Romero, C., & Ventura, S. (2013). Data mining in education. Wiley Interdisciplinary Reviews: Data mining and knowledge discovery, 3(1), 12-27.
Rong, Y., Leemann, T., Borisov, V., Kasneci, G., & Kasneci, E. (2022). A consistent
and efficient evaluation strategy for attribution methods. arXiv preprint
arXiv:2202.00449.
Rosenfeld, A. (2021). Better metrics for evaluating explainable artificial intelligence.
Proceedings of the 20th international conference on autonomous agents and
multiagent systems,
Samek, W., Binder, A., Montavon, G., Lapuschkin, S., & Müller, K.-R. (2016).
Evaluating the visualization of what a deep neural network has learned. IEEE
transactions on neural networks and learning systems, 28(11), 2660-2673.
Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K.-R. (2019).
Explainable AI: interpreting, explaining and visualizing deep learning (Vol.
11700). Springer Nature.
Schneider, K.-M. (2003). A comparison of event models for naive Bayes anti-spam
e-mail filtering. 10th Conference of the European Chapter of the Association for
Computational Linguistics,
Shapley, L. S. (1953). A value for n-person games. Contributions to the Theory of
Games, 2(28), 307-317.
Singh, D., & Singh, B. (2020). Investigating the impact of data normalization on
classification performance. Applied Soft Computing, 97, 105524.
Sperandei, S. (2014). Understanding logistic regression analysis. Biochemia medica,
24(1), 12-18.
Theiner, J., Müller-Budack, E., & Ewerth, R. (2022). Interpretable semantic photo
geolocation. Proceedings of the IEEE/CVF Winter Conference on
Applications of Computer Vision,
Tomasevic, N., Gvozdenovic, N., & Vranes, S. (2020). An overview and comparison
of supervised data mining techniques for student exam performance
prediction. Computers & Education, 143, 103676.
van der Waa, J., Nieuwburg, E., Cremers, A., & Neerincx, M. (2021). Evaluating
XAI: A comparison of rule-based and example-based explanations. Artificial
Intelligence, 291, 103404.
Wolff, A., Zdrahal, Z., Nikolov, A., & Pantucek, M. (2013). Improving retention:
predicting at-risk students by analysing clicking behaviour in a virtual learning
environment. Proceedings of the third international conference on learning
analytics and knowledge,
Yeh, C.-K., Hsieh, C.-Y., Suggala, A., Inouye, D. I., & Ravikumar, P. K. (2019). On
the (in)fidelity and sensitivity of explanations. Advances in neural
information processing systems, 32.
Zhang, J., Bargal, S. A., Lin, Z., Brandt, J., Shen, X., & Sclaroff, S. (2018). Top-down neural attention by excitation backprop. International Journal of Computer
Vision, 126(10), 1084-1102.
Zhou, J., Gandomi, A. H., Chen, F., & Holzinger, A. (2021). Evaluating the quality of
machine learning explanations: A survey on methods and metrics. Electronics,
10(5), 593.
Zitnik, M., Nguyen, F., Wang, B., Leskovec, J., Goldenberg, A., & Hoffman, M. M.
(2019). Machine learning for integrating data in biology and medicine:
Principles, practice, and opportunities. Information Fusion, 50, 71-91.
Braunstein, A. W., Lesser, M. N., & Pescatrice, D. R. (2008). The Impact of a
Program for the Disadvantaged on Student Retention. College Student Journal,
42(1).
Hendel, D. D. (2007). Efficacy of participating in a first-year seminar on student
satisfaction and retention. Journal of College Student Retention: Research,
Theory & Practice, 8(4), 413-423.
Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied linear statistical
models. McGraw-Hill.
Cremer, C. Z. (2021). Deep limitations? Examining expert disagreement over deep
learning. Progress in Artificial Intelligence, 10, 449-464.
Lu, O. H. T., Huang, A. Y. Q., Flanagan, B., Ogata, H., & Yang, S. J. H. (2022). A
quality data set for data challenge: Featuring 160 students' learning behaviors
and learning strategies in a programming course. Asia-Pacific Society for
Computers in Education, 30, 10.