Thesis 111522064 — Detailed Record




Name: Min-Jia Li (李敏嘉)    Department: Computer Science and Information Engineering
Thesis Title: 運用生成式AI提升學生學習策略與學習成效
(Utilizing Generative AI to Enhance Students' Learning Strategies and Outcomes)
Related Theses
★ 應用智慧分類法提升文章發佈效率於一企業之知識分享平台
★ 家庭智能管控之研究與實作
★ 開放式監控影像管理系統之搜尋機制設計及驗證
★ 資料探勘應用於呆滯料預警機制之建立
★ 探討問題解決模式下的學習行為分析
★ 資訊系統與電子簽核流程之總管理資訊系統
★ 製造執行系統應用於半導體機台停機通知分析處理
★ Apple Pay支付於iOS平台上之研究與實作
★ 應用集群分析探究學習模式對學習成效之影響
★ 應用序列探勘分析影片瀏覽模式對學習成效的影響
★ 一個以服務品質為基礎的網際服務選擇最佳化方法
★ 維基百科知識推薦系統對於使用e-Portfolio的學習者滿意度調查
★ 學生的學習動機、網路自我效能與系統滿意度之探討-以e-Portfolio為例
★ 藉由在第二人生內使用自動對話代理人來改善英文學習成效
★ 合作式資訊搜尋對於學生個人網路搜尋能力與策略之影響
★ 數位註記對學習者在線上學習環境中反思等級之影響
Full text: viewable in the thesis system after 2029-08-01 (embargoed until then).
Abstract (Chinese) Existing studies predict students' learning outcomes and use explainable artificial intelligence (XAI) to provide explanations. However, XAI only indicates how strongly each feature influences the result rather than offering causal explanations directly, which can make the link between features and outcomes hard for non-experts to understand. Prior research shows that learning feedback benefits students, but tailoring feedback to every student is costly; generative artificial intelligence (GenAI) offers a solution here. This study uses GenAI to provide personalized learning feedback and suggestions for all students. The LBLS-516 dataset is used to train prediction models, XAI is used to explain the degree of influence of each feature, and GenAI is then used to provide further explanations and improvement suggestions. To evaluate how effective the explanations generated by GenAI and XAI are for students, this study uses the System Causability Scale (SCS). The results show that GenAI better conveys the causal relationship between students' learning behaviors and strategies and their learning outcomes, and that the improvement suggestions generated by GenAI did enhance students' learning outcomes and helped them improve their self-regulation and programming-learning abilities. With the help of GenAI, students can better understand the causal relationship between their learning behaviors, strategies, and learning outcomes, and explanations can be generated at scale, which is of substantial help to teachers and learning analytics teams.
Abstract (English) Currently, studies predict student learning outcomes and utilize explainable artificial intelligence (XAI) for explanations. However, XAI only provides the extent of features' influence on results, rather than directly offering causal explanations. Non-experts may find it challenging to grasp the correlation between features and outcomes. Past research indicates that learning feedback benefits students, yet customizing feedback for each student is expensive. Generative AI (GenAI) offers a solution. This study aims to use GenAI to provide personalized learning feedback suggestions for all students. The study utilizes the LBLS-516 dataset to train the prediction models, employs XAI to demonstrate feature influence, and subsequently utilizes GenAI to provide additional explanations and improvement recommendations. To evaluate whether GenAI can enhance the explanatory power of XAI, this study uses the System Causability Scale (SCS). Findings indicate that GenAI can more effectively illustrate the causal relationship between students' learning behaviors and strategies and their learning outcomes. With GenAI assistance, students can better comprehend the causal links between their learning behaviors, strategies, and learning effectiveness. Additionally, GenAI can produce explanations at scale, providing valuable support to teachers and learning analytics teams.
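To make the pipeline described in the abstracts concrete, the following is a minimal Python sketch, not the thesis code: it trains a tree-based classifier on placeholder learning-behaviour features, uses SHAP to quantify each feature's influence on one student's prediction, and assembles those influences into a prompt that a generative model could turn into personalized feedback. The feature names, synthetic data, and prompt wording are illustrative assumptions; the study itself uses the LBLS-516 dataset and its own prompt design (Appendix 9.3).

# Hypothetical, simplified sketch of the described pipeline (not the thesis code):
# predict pass/fail from learning-behaviour features, explain the prediction with
# SHAP, and build a prompt for a generative model to write personalised feedback.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
n = 200

# Synthetic stand-in for LBLS-516: hypothetical behaviour features per student.
X = pd.DataFrame({
    "video_watch_minutes": rng.uniform(0, 300, n),
    "forum_posts": rng.integers(0, 30, n),
    "practice_submissions": rng.integers(0, 50, n),
})
y = (X["practice_submissions"] + 0.05 * X["video_watch_minutes"]
     + rng.normal(0, 5, n) > 30).astype(int)  # 1 = predicted to pass

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP values: how much each feature pushed this student's prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

student = 0
# Depending on the SHAP version, tree explainers return either a list of
# per-class arrays or a single (samples, features, classes) array.
if isinstance(shap_values, list):
    contributions = shap_values[1][student]
elif shap_values.ndim == 3:
    contributions = shap_values[student, :, 1]
else:
    contributions = shap_values[student]
contrib = pd.Series(contributions, index=X.columns)
top = contrib.abs().sort_values(ascending=False).head(3).index

# Assemble the XAI output into a prompt; in the study this kind of text would be
# sent to a generative model to obtain a plain-language explanation and suggestions.
verdict = "pass" if model.predict(X_test.iloc[[student]])[0] == 1 else "not pass"
prompt = (
    f"The model predicts this student will {verdict}. "
    "The most influential learning behaviours (SHAP contributions) are:\n"
    + "\n".join(f"- {name}: {contrib[name]:+.3f}" for name in top)
    + "\nExplain in plain language how these behaviours relate to the predicted "
      "outcome and suggest concrete improvements to the student's learning strategy."
)
print(prompt)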
Keywords (Chinese) ★ 生成式人工智慧
★ 可解釋人工智慧
★ 因果性
★ 學習成效
Keywords (English) ★ Generative AI
★ Explainable AI
★ Causability
★ Learning Performance
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Literature Review
2.1. The Impact of Learning Strategies on Learning Programming Languages
2.2. Applications and Current State of Artificial Intelligence in Education
2.3. Current Development of Explainable AI
2.4. Explainability and Causability
2.5. Applications and Current State of Generative AI in Education
3. Research Methodology
3.1. Experimental Design
3.1.1. Participants
3.1.2. Experimental Procedure
3.1.3. Generating Personalized Learning-Analysis and Improvement-Suggestion Reports
3.2. The LBLS-516 Dataset
3.3. Data Preprocessing
3.4. Building the Prediction Models
3.4.1. Decision Tree
3.4.2. Random Forest
3.4.3. Support Vector Machine
3.5. Model Performance Evaluation
3.6. Generating Explanations and Improvement Suggestions
3.6.1. Generating Explanations with XAI (SHAP)
3.6.2. Using Generative AI to Produce Further Explanations and Improvement Suggestions
3.7. Evaluating Explanation Quality: Causability
4. Results
4.1. Prediction Model Performance
4.2. Generated Explanations and Improvement Suggestions
4.3. Evaluating Explanation Quality: Causability Assessment
4.4. Impact of GenAI-Provided Improvement Suggestions on Students' Learning Strategies
4.5. Impact of GenAI-Provided Improvement Suggestions on Learning Outcomes
4.6. Case Studies
5. Discussion
5.1. Research Question 1: Is the Explanatory Power of Generative AI Better than That of Explainable AI?
5.2. Research Question 2: Can the Improvement Suggestions Generated by Generative AI Improve Students' Learning Strategies?
5.3. Research Question 3: Can the Improvement Suggestions Generated by Generative AI Enhance Students' Learning Outcomes?
5.4. Further Discussion
5.4.1. Different Learning-Strategy Dimensions Improve to Different Degrees
5.4.2. The Impact of the Case Students' Learning Strategies on Their Learning Outcomes
6. Conclusion
7. Future Work and Limitations
8. References
9. Appendices
9.1. MSLQ Questionnaire Item Categories and Descriptions
9.2. SILL Questionnaire Item Categories and Descriptions
9.3. Prompts
References
Abe, S., Tago, S., Yokoyama, K., Ogawa, M., Takei, T., Imoto, S., & Fuji, M. (2023). Explainable AI for Estimating Pathogenicity of Genetic Variants Using Large-Scale Knowledge Graphs. Cancers, 15(4), 1118.
Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., & Anadkat, S. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Adnan, M., Uddin, M. I., Khan, E., Alharithi, F. S., Amin, S., & Alzahrani, A. A. (2022). Earliest Possible Global and Local Interpretation of Students’ Performance in Virtual Learning Environment by Leveraging Explainable AI. IEEE Access, 10, 129843-129864.
Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99. https://doi.org/10.1016/j.inffus.2023.101805
Amin, F., & Mahmoud, M. (2022). Confusion Matrix in Binary Classification Problems: A Step-by-Step Tutorial. Journal of Engineering Research, 6(5), 0-0.
Ani, A. (2019). Positive feedback improves students’ psychological and physical learning outcomes. Indonesian Journal of Educational Studies, 22(2).
Apley, D. W., & Zhu, J. (2020). Visualizing the effects of predictor variables in black box supervised learning models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(4), 1059-1086.
Arachchi, S., Dias, K., Madanayake, R., Chong, E., & Gunawardana, K. (2014). A comparison between Evaluation of computer based testing and paper based testing for subjects in Computer Programming. International Journal of Software Engineering & Applications (IJSEA), 5(1).
Begosso, L. C., Begosso, L. R., Gonçalves, E. M., & Gonçalves, J. R. (2012). An approach for teaching algorithms and computer programming using Greenfoot and Python. 2012 Frontiers in Education Conference Proceedings,
Bergin, S., Reilly, R., & Traynor, D. (2005). Examining the role of self-regulated learning on introductory programming performance. Proceedings of the first international workshop on Computing education research,
Bradley, A. P. (1997). The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7), 1145-1159.
Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32.
Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and Regression Trees.
Brodley, C. E., & Utgoff, P. E. (1995). Multivariate decision trees. Machine Learning, 19, 45-77.
Brooke, J. (1996). SUS-A quick and dirty usability scale. Usability evaluation in industry, 189(194), 4-7.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in higher education, 36(4), 395-407.
Carpenter, R., & Alloway, T. (2019). Computer versus paper-based testing: are they equivalent when it comes to working memory? Journal of psychoeducational assessment, 37(3), 382-394.
Cervantes, J., Garcia-Lamont, F., Rodríguez-Mazahua, L., & Lopez, A. (2020). A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing, 408, 189-215.
Chen, C.-H., Yang, S. J., Weng, J.-X., Ogata, H., & Su, C.-Y. (2021). Predicting at-risk university students based on their e-book reading behaviours by using machine learning classifiers. Australasian Journal of Educational Technology, 37(4), 130-144.
Chiu, T. K., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4, 100118.
Chou, C.-Y., Lai, K. R., Chao, P.-Y., Tseng, S.-F., & Liao, T.-Y. (2018). A negotiation-based adaptive learning system for regulating help-seeking behaviors. Computers & Education, 126, 115-128.
Chou, Y.-L., Moreira, C., Bruza, P., Ouyang, C., & Jorge, J. (2022). Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications. Information Fusion, 81, 59-83.
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273-297.
Danko, M., & Dečman, M. (2019). The strategy inventory for second language learning: Tested, adapted, and validated in the Slovenian higher education context. ESP Today, Journal of English for Specific Purposes at Tertiary Level, 7(2), 207-230.
Das, A., & Rad, P. (2020). Opportunities and challenges in explainable artificial intelligence (xai): A survey. arXiv preprint arXiv:2006.11371.
Elkins, S., Kochmar, E., Serban, I., & Cheung, J. C. (2023). How useful are educational questions generated by large language models? International Conference on Artificial Intelligence in Education,
Ergen, B., & Kanadlı, S. (2017). The effect of self-regulated learning strategies on academic achievement: A meta-analysis study. Eurasian Journal of Educational Research, 17(69), 55-74.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of educational research, 83(1), 70-120.
Friedman, J. H. (2001). Greedy function approximation: a gradient boosting machine. Annals of statistics, 1189-1232.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5), 1-42.
Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M., & Hussain, A. (2024). Interpreting black-box models: a review on explainable artificial intelligence. Cognitive Computation, 16(1), 45-74.
Herodotou, C., Rienties, B., Boroowa, A., Zdrahal, Z., & Hlosta, M. (2019). A large-scale implementation of predictive learning analytics in higher education: The teachers’ role and perspective. Educational Technology Research and Development, 67, 1273-1306.
Holzinger, A., Carrington, A., & Müller, H. (2020). Measuring the quality of explanations: the system causability scale (SCS) comparing human and machine explanations. KI-Künstliche Intelligenz, 34(2), 193-198.
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data mining and knowledge discovery, 9(4), e1312.
Jang, Y., Choi, S., Jung, H., & Kim, H. (2022). Practical early prediction of students’ performance using machine learning and eXplainable AI. Education and Information Technologies, 27(9), 12855-12889.
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., & Hüllermeier, E. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and individual differences, 103, 102274. https://www.sciencedirect.com/science/article/abs/pii/S1041608023000195
Kotsiantis, S. B. (2013). Decision trees: a recent overview. Artificial Intelligence Review, 39, 261-283.
Krening, S., Harrison, B., Feigh, K. M., Isbell, C. L., Riedl, M., & Thomaz, A. (2016). Learning from explanations using sentiment and advice in RL. IEEE Transactions on Cognitive and Developmental Systems, 9(1), 44-55.
López Zambrano, J., Lara Torralbo, J. A., & Romero Morales, C. (2021). Early prediction of student learning performance through data mining: A systematic review. Psicothema.
Li, S., Chen, J., Shen, Y., Chen, Z., Zhang, X., Li, Z., Wang, H., Qian, J., Peng, B., & Mao, Y. (2022). Explanations from large language models make small reasoners better. arXiv preprint arXiv:2210.06726.
Liffiton, M., Sheese, B. E., Savelka, J., & Denny, P. (2023). Codehelp: Using large language models with guardrails for scalable support in programming classes. Proceedings of the 23rd Koli Calling International Conference on Computing Education Research,
Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31-57.
Lu, O. H., Huang, A. Y., Huang, J. C., Huang, C. S., & Yang, S. J. (2016). Early-Stage Engagement: Applying Big Data Analytics on Collaborative Learning Environment for Measuring Learners' Engagement Rate. 2016 International Conference on Educational Innovation through Technology (EITT),
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Hossin, M., & Sulaiman, M. N. (2015). A Review on Evaluation Metrics for Data Classification Evaluations. International Journal of Data Mining & Knowledge Management Process, 5(2), 1-11. https://doi.org/10.5121/ijdkp.2015.5201
Marquès Puig, J. M., Daradoumis, T., Arguedas, M., & Calvet Liñan, L. (2022). Using a distributed systems laboratory to facilitate students' cognitive, metacognitive and critical thinking strategy use. Journal of Computer Assisted Learning, 38(1), 209-222.
Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2022). Explainable artificial intelligence: a comprehensive review. Artificial Intelligence Review, 1-66.
Nagy, M., & Molontay, R. (2023). Interpretable Dropout Prediction: Towards XAI-Based Personalized Intervention. International Journal of Artificial Intelligence in Education, 1-27.
Namoun, A., & Alshanqiti, A. (2020). Predicting student performance using data mining and learning analytics techniques: A systematic literature review. Applied Sciences, 11(1), 237.
Ogata, H., Yin, C., Oi, M., Okubo, F., Shimada, A., Kojima, K., & Yamada, M. (2015). E-Book-based learning analytics in university education. International conference on computer in education (ICCE 2015),
Öqvist, M., & Nouri, J. (2018). Coding by hand or on the computer? Evaluating the effect of assessment mode on performance of students learning programming. Journal of Computers in Education, 5, 199-219.
Osmanbegović, E., Suljić, M., & Agić, H. (2014). Determining dominant factor for students performance prediction by using data mining classification algorithms. Tranzicija, 16(34), 147-158.
Ouatik, F., Erritali, M., Ouatik, F., & Jourhmane, M. (2022). Predicting student success using big data and machine learning algorithms. International Journal of Emerging Technologies in Learning (Online), 17(12), 236.
Oxford, R. (1990). Language learning strategies: What every teacher should know. Heinle & Heinle Publishers.
Pankiewicz, M., & Baker, R. S. (2023). Large Language Models (GPT) for automating feedback on programming assignments. arXiv preprint arXiv:2307.00150.
Pek, R. Z., Özyer, S. T., Elhage, T., Özyer, T., & Alhajj, R. (2022). The role of machine learning in identifying students at-risk and minimizing failure. IEEE Access, 11, 1224-1243.
Pintrich, P. R. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ).
Porter, B., & Grippa, F. (2020). A platform for AI-enabled real-time feedback to promote digital collaboration. Sustainability, 12(24), 10243.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,
Ronanki, K., Cabrero-Daniel, B., Horkoff, J., & Berger, C. (2024). Requirements engineering using generative ai: Prompts and prompting patterns. In Generative AI for Effective Software Development (pp. 109-127). Springer.
Roussel, C., & Böhm, K. (2023). Geospatial xai: A review. ISPRS International Journal of Geo-Information, 12(9), 355.
Selvaraj, A. M., & Azman, H. (2020). Reframing the Effectiveness of Feedback in Improving Teaching and Learning Achievement. International Journal of Evaluation and Research in Education, 9(4), 1055-1062.
Shapley, L. S. (1953). A value for n-person games.
Singh, D., & Singh, B. (2020). Investigating the impact of data normalization on classification performance. Applied Soft Computing, 97, 105524.
Tan, B., Gan, Z., & Wu, Y. (2023). The measurement and early warning of daily financial stability index based on XGBoost and SHAP: Evidence from China. Expert Systems with Applications, 227, 120375.
Tomasevic, N., Gvozdenovic, N., & Vranes, S. (2020). An overview and comparison of supervised data mining techniques for student exam performance prediction. Computers & Education, 143, 103676.
Van der Velden, B. H., Kuijf, H. J., Gilhuijs, K. G., & Viergever, M. A. (2022). Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 79, 102470.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, X., Zhang, L., & He, T. (2022). Learning performance prediction-based personalized feedback in online learning via machine learning. Sustainability, 14(13), 7654.
Wang, Y. (2021). When artificial intelligence meets educational leaders’ data-informed decision-making: A cautionary tale. Studies in Educational Evaluation, 69, 100872.
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382.
Wong, J., Baars, M., Davis, D., Van Der Zee, T., Houben, G.-J., & Paas, F. (2019). Supporting self-regulated learning in online learning environments and MOOCs: A systematic review. International Journal of Human–Computer Interaction, 35(4-5), 356-373.
Xu, W., Meng, J., Raja, S. K. S., Priya, M. P., & Kiruthiga Devi, M. (2023). Artificial intelligence in constructing personalized and accurate feedback systems for students. International Journal of Modeling, Simulation, and Scientific Computing, 14(01), 2341001.
Zheng, L. (2016). The effectiveness of self-regulated learning scaffolds on academic performance in computer-based learning environments: A meta-analysis. Asia Pacific Education Review, 17, 187-202.
Advisor: Stephen J.H. Yang (楊鎮華)    Date of Approval: 2024-07-10