Abstract: Intelligent Tutoring Systems are digital learning support tools designed to simulate the role of a human teacher. They provide students with real-time feedback and personalized guidance based on their learning behaviors, helping to reduce teachers' in-class workload. With recent advances in large language models, generative AI-based tutoring assistants have shown impressive capabilities in semantic understanding and meaningful dialogue. The Chain-of-Thought prompting framework further enhances these systems by guiding the model to reason before generating a response, producing replies that are more logical and explainable. In this study, we developed an intelligent tutoring assistant based on the GPT-4o large language model on the CoSci online science simulation platform. Two guidance modes were designed: a passive assistant that only responds to students' questions, and an active assistant enhanced with a Chain-of-Thought mechanism that proactively guides students based on their learning status. A total of 22 first-year students from a high school in northern Taiwan participated in the study. Each student used both types of assistants in sequence, paired with science simulations on different topics.
Before and after each activity, students completed a corresponding physics concept test. Their simulation logs and interactions with the assistants were recorded for further analysis of learning outcomes and behavioral differences under the two guidance modes. The results showed that students' post-test scores improved significantly in both activities, with no significant difference in the magnitude of improvement between the two modes, indicating that both guidance strategies effectively supported learning. However, the learning processes and interaction patterns differed notably. In the passive mode, students led the interaction, typically in a question-and-answer format; in the active mode, the assistant led the interaction, actively asking questions, giving feedback, and guiding learning according to each student's situation. Compared with the passive assistant, the active assistant better supported knowledge transformation and shifted students' attention from the simulation to conversations with the assistant. Further analysis revealed a significant positive correlation between the quality of the assistant's Chain-of-Thought reasoning and students' learning performance, suggesting that when the assistant can accurately infer a student's situation and apply suitable strategies, it can enhance learning outcomes. In addition, qualitative case studies were conducted to reveal how the assistant guided students and how Chain-of-Thought reasoning shaped its responses. Based on these findings, the study concludes with suggestions for future improvements.