Abstract:

Since the launch of ChatGPT, its real-time response capabilities have showcased the potential of artificial intelligence in education, especially in answering difficult questions, providing academic advice, and assisting with information retrieval. However, its application in education still faces many challenges, such as passive responses, generic generated content that lacks specificity, and the inability to incorporate teachers' own teaching experience. We therefore created an educational agent crafting system, the Educational Agent Crafting Tool (EduACT), that allows teachers to embed their teaching experience into their own educational agents. This study focuses on improving and extending the EduACT system, particularly the chatbot creation process and the supplementation of teaching tasks.
First, this study designed a new agent creation method, Agent Builder, which guides teachers through real-time chat and interactive prompts so that they can build their conversational agents step by step. The system also supports automated dialogue testing, which greatly improves the efficiency of the creation process and helps creators identify design issues. Second, experiments showed that 11.3% of the dialogue scenarios in the system lacked a suitable task, so we designed dynamic tasks, reducing the proportion of scenarios with no available task to 6.7%. The prompt of a dynamic task varies with each agent's task goals, generating more precise and personalized responses for users.
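The idea that a dynamic task's prompt varies with each agent's task goals can be sketched as follows. This is an illustrative sketch only: the prompt template, function name, and example goal below are hypothetical assumptions, not the system's actual implementation.

```python
# Hypothetical sketch: EduACT's real prompt format is not published here,
# so this template and all names in it are illustrative assumptions.

def build_dynamic_task_prompt(agent_goal: str, scenario: str) -> str:
    """Compose a dynamic-task prompt from an agent's task goal and the
    current dialogue scenario, so that the fallback response stays
    aligned with what this particular agent is trying to teach."""
    return (
        f"You are an educational agent whose task goal is: {agent_goal}\n"
        f"No predefined task matches the current dialogue scenario:\n"
        f"{scenario}\n"
        "Reply in a way that still advances the task goal above, "
        "keeping the response specific and personalized."
    )

# Example usage with a made-up agent goal and scenario.
prompt = build_dynamic_task_prompt(
    agent_goal="help students practice English job interviews",
    scenario="The student suddenly asks how to write a resume.",
)
print(prompt)
```

Because the agent's own goal is interpolated into the prompt, two agents facing the same uncovered scenario would receive different instructions, which is what lets the generated response stay on-task rather than generic.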
To evaluate whether responses improve after a dynamic task is selected, we compared the responses generated by unsuitable tasks with those generated by dynamic tasks built in two ways: manually designed and automatically generated. The results indicate that manually designed dynamic tasks cannot adequately handle the dialogue scenarios of different agents. In contrast, automatically generated dynamic tasks produced responses that were better than or equal to those of unsuitable tasks in 64% of cases, and in 16% of the dialogue data the response quality far exceeded both of the aforementioned task types, demonstrating the flexibility and effectiveness of dynamic tasks in supplementing dialogues.
In conclusion, this study contributes a new methodological framework for designing and applying educational chatbots. The framework emphasizes the importance of user-friendliness and system flexibility, offering valuable experience and reference points for future research and practice in related fields.