摘要: Programming education is widely regarded as a core foundation for cultivating problem-solving ability, innovative thinking, and competitiveness. In large classes, however, the imbalanced teacher-to-student ratio often makes timely feedback and adaptive instruction difficult to provide. To address this challenge, this study developed TaskInsighter, an adaptive system that integrates a self-explanation strategy with a large language model (LLM), designed to help teachers track students' reasoning processes and strengthen their programming ability without adding to the grading workload. Framed by self-explanation theory and Questions about Learners' Code (QLC), TaskInsighter first parses the Python code students submit, generates open-ended questions closely aligned with the code's semantics, guides students to explain their programs in response to those questions, and provides immediate system feedback, covering both a holistic evaluation and concrete revision suggestions, thereby driving a loop of multiple revision rounds.
This study adopted a quasi-experimental design to evaluate how integrating a self-explanation chatbot into a 16-week "Python for Educational Data Mining" course affected students' learning outcomes. Participants were master's and doctoral students at a university in northern Taiwan. Intact classes were assigned to an experimental group and a control group, both measured before and after the intervention. After submitting each assignment, the 30 students in the experimental group interacted with TaskInsighter, which combines adaptive questioning, instant scoring, and multi-round revision; during the same activity time, the 17 students in the control group instead discussed and revised their assignments with peers, without using the system, so that the effect of the intervention could be compared. Data were collected with a computational thinking test, a Python proficiency test, and a learning motivation and strategies scale; at post-test, a technology acceptance questionnaire and open-ended questions were also administered to the experimental group to capture their perceptions of the system.
The results show that, compared with the control group, TaskInsighter significantly improved programming ability while also enhancing computational thinking, learning motivation, and learning strategies. Qualitative feedback further indicated that the adaptive questions and instant scoring effectively stimulated deeper reflection and cultivated the habit of actively checking program logic. Students who produced high-quality self-explanations within a moderate number of rounds benefited the most, showing that explanation quality is a key factor in learning gains.
Through self-explanation and adaptive generative AI, TaskInsighter requires students not only to write correct code but also to explain it clearly. Across sixteen weeks of large-class teaching, this deep adaptive questioning and immediate feedback produced significant growth in students' Python performance and computational thinking, raised their learning motivation, and cultivated learning strategies such as critical thinking and review. The system's dynamic questioning and instant scoring not only ease the teacher's workload but also show that combining self-explanation with generative AI is a feasible model for promoting deep learning in programming education.
Abstract: Programming education has become a cornerstone for cultivating problem-solving skills, innovative thinking, and future competitiveness. However, in large classes the imbalance between teacher and student numbers often prevents instructors from providing timely, individualized feedback. To address this challenge, we developed TaskInsighter, an adaptive system that embeds self-explanation strategies within a large language model. TaskInsighter enables instructors to monitor students' reasoning in real time, without adding to their grading workload, while simultaneously helping learners strengthen their coding proficiency.
A quasi-experimental design evaluated the impact of integrating a self-explanation chatbot into a 16-week "Python for Educational Data Mining" course. Participants were master's and doctoral students at a university in northern Taiwan. Two intact classes were assigned to the experimental (n = 30) and control (n = 17) groups, with pre- and post-test measures. After submitting each assignment, the experimental group interacted with TaskInsighter, which provided adaptive questioning, instantaneous scoring, and multi-round revision. During the same activity time, the control group engaged only in peer discussion and revision, without system support. Data were collected using a computational thinking scale, a Python proficiency test, and questionnaires assessing learning motivation and strategies. The post-test also included a technology acceptance survey and open-ended questions for the experimental group.
Grounded in self-explanation theory and the Questions about Learners' Code (QLC) framework, TaskInsighter analyzes submitted Python code, generates semantically aligned open-ended prompts, guides learners to articulate their reasoning, and delivers real-time feedback, combining holistic evaluations with concrete revision suggestions, to foster iterative improvement.
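A minimal sketch of this question-and-feedback loop, assuming a generic LLM client; the helper names (llm, generate_qlc_prompt, review_explanation, revision_loop) are illustrative assumptions and not TaskInsighter's actual implementation:

```python
# Illustrative sketch only -- not the authors' implementation. All names here
# are assumptions used to show how a QLC-style question/feedback loop could be wired.
from dataclasses import dataclass


def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model (assumed helper)."""
    raise NotImplementedError("Plug an LLM client in here.")


@dataclass
class Feedback:
    holistic: str        # overall evaluation of the self-explanation
    revision_hint: str   # one concrete suggestion for the next round


def generate_qlc_prompt(code: str) -> str:
    """Ask the LLM for one open-ended question about the learner's own code."""
    return llm(
        "Read this Python submission and pose ONE open-ended question that asks "
        "the student to explain a specific decision in their own code:\n" + code
    )


def review_explanation(code: str, question: str, explanation: str) -> Feedback:
    """Ask the LLM for a holistic evaluation plus a concrete revision suggestion."""
    reply = llm(
        f"Code:\n{code}\n\nQuestion: {question}\n"
        f"Student explanation: {explanation}\n"
        "Reply with a short holistic evaluation on the first line and one "
        "concrete revision suggestion on the second line."
    )
    holistic, _, hint = reply.partition("\n")
    return Feedback(holistic=holistic.strip(), revision_hint=hint.strip())


def revision_loop(code: str, get_student_reply, max_rounds: int = 3) -> None:
    """Multi-round loop: adaptive question -> self-explanation -> feedback."""
    question = generate_qlc_prompt(code)
    for _ in range(max_rounds):
        explanation = get_student_reply(question)
        feedback = review_explanation(code, question, explanation)
        print(feedback.holistic)
        print("Suggestion:", feedback.revision_hint)
```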
Results revealed that, relative to the control group, the experimental group showed significantly greater gains in programming ability, computational thinking, learning motivation, and learning strategies. Qualitative feedback indicated that adaptive prompts and instant scoring stimulated deeper reflection and nurtured habits of actively checking program logic. Learners who produced high-quality self-explanations within a reasonable number of rounds benefited the most, underscoring explanation quality as a critical factor in learning gains.
By requiring students not only to write correct code but also to explain it clearly, TaskInsighter demonstrates that fusing self-explanation with adaptive generative AI can promote deep learning in large-scale programming courses. Over the sixteen weeks, the system's dynamic questioning and immediate feedback significantly enhanced Python performance and computational thinking, increased motivation, and fostered strategies such as critical reflection and iterative review, all while alleviating teachers' grading burden.