With the increasing integration of Large Language Models (LLMs) into programming education, their capabilities in code generation and collaborative assistance have become valuable tools for learners. However, without effective guidance, novice learners often struggle to articulate module responsibilities or write well-structured prompts, resulting in LLM-generated code that is difficult to verify and test, and errors that are hard to detect and correct. To address this issue, this study proposes a learning approach that integrates Responsibility-Driven Design (RDD), Test-Driven Learning (TDL), and LLM assistance, and develops a scenario-based instructional system centered on guiding questions. The system leads learners to clarify module responsibilities and acceptance criteria before implementation, progressively completing the design and testing process. A seven-week teaching experiment was conducted, combining pre- and post-tests, code analysis, questionnaire surveys, and prompt comparison to evaluate learning effectiveness. The results showed that the experimental group significantly outperformed the control group in code correctness. The findings further indicate that scenario-based guiding questions combined with a test-first learning framework help learners develop a verification mindset when collaborating with LLMs: learners were better able to clarify requirements and validate whether the generated code met expectations, thereby improving the correctness of their programming process.