Graduate Thesis 111522126: Complete Metadata Record

DC Field | Value | Language
dc.contributor | Department of Computer Science and Information Engineering | zh_TW
dc.creator | 顏維新 | zh_TW
dc.creator | Wei-Hsin Yen | en_US
dc.date.accessioned | 2024-07-30T07:39:07Z
dc.date.available | 2024-07-30T07:39:07Z
dc.date.issued | 2024
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=111522126
dc.contributor.department | Department of Computer Science and Information Engineering | zh_TW
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | With the advancement of large language model (LLM) technology, LLMs have become important assistive tools in software development. However, LLMs still face many challenges regarding the accuracy and reliability of code generation. This thesis analyzes in depth the correctness of current LLMs in code generation, examines their limitations in practical use, and proposes a new solution to improve the accuracy of generated code. It introduces an LLM-based code generation method named JudgeCoder, which adopts a multi-agent system and a Chain-of-Thought (CoT) strategy to increase the correctness of generated code. By simulating the division of labor in team-based software development, JudgeCoder separates code writing, test-data writing, and test execution into three distinct tasks, reducing the hallucinations (LLM hallucination) that a single LLM may produce when its responsibilities are unclear. It further proposes a strategy that incorporates the idea of CoT-SC (Chain of Thought with Self-Consistency) to detect erroneous test data produced by model hallucination, preventing faulty test data from sending the system into an unnecessary error-correction process. In experiments, JudgeCoder demonstrates excellent performance, achieving state-of-the-art results on the HumanEval and HumanEval-ET evaluation datasets, which shows that the proposed voting mechanism, combined with appropriate prompting strategies and a sound error-judgment mechanism, can effectively improve the accuracy of generated code. These results not only validate the practicality of JudgeCoder but also provide an application direction for future research on LLM-based automatic code generation. | zh_TW
dc.description.abstract | With the advancement of Large Language Models (LLMs), these models have become pivotal aids in software development. However, LLMs still face numerous challenges in terms of the accuracy and reliability of code generation. This thesis aims to thoroughly analyze the correctness of current LLMs in code generation, explore their practical limitations, and propose solutions to enhance the accuracy of generated code. It introduces a code generation method based on LLMs, named JudgeCoder, which employs a multi-agent system and a Chain-of-Thought (CoT) strategy to increase the correctness of code generation. By simulating the division of labor in team coding environments, the process separates code generation, test data generation, and test execution, thereby reducing the hallucinations often caused by unclear task division in a single LLM. Moreover, the thesis presents a strategy combining Chain of Thought with Self-Consistency (CoT-SC), which further detects erroneous test data produced by model hallucinations, preventing faulty test data from triggering an incorrect correction process. In experiments, JudgeCoder demonstrates strong performance, achieving state-of-the-art results on the HumanEval and HumanEval-ET datasets. The results confirm that the proposed voting mechanism, coupled with appropriate prompting strategies and a sound error-judgment mechanism, can effectively enhance the accuracy of generated code. These findings not only validate the practicality of JudgeCoder but also provide a direction for future research in LLM-based automatic code generation. | en_US
dc.subject | Large Language Models | zh_TW
dc.subject | Code Generation | zh_TW
dc.subject | ChatGPT | zh_TW
dc.subject | Chain-of-Thought | zh_TW
dc.subject | Multi-Agent System | zh_TW
dc.subject | LLM Voting | zh_TW
dc.subject | LLM | en_US
dc.subject | Code Generation | en_US
dc.subject | ChatGPT | en_US
dc.subject | Chain-of-Thought | en_US
dc.subject | Multi-Agent Collaboration | en_US
dc.subject | LLM Judge | en_US
dc.title | Adding an LLM Voting Unit to a Multi-Agent System to Improve the Correctness of Generated Code | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Enhancing Code Generation Accuracy through the Addition of LLM Judging Units in a Multi-Agent System | en_US
dc.type | Graduate thesis | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
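
The abstract above describes JudgeCoder's workflow: separate agents for code writing, test-data writing, and test execution, with a CoT-SC-style majority vote that filters hallucinated test data before it can trigger a repair loop. The sketch below illustrates that division of labor in Python (the language of the HumanEval benchmark). It is a minimal illustration only, not the thesis's actual implementation: the agent functions are hard-coded stand-ins for LLM calls, and names such as write_code, write_tests, vote_on_tests, and run_tests are hypothetical.

```python
from collections import Counter

def write_code(task: str) -> str:
    # Coder agent: in the real system this would be an LLM call with a
    # Chain-of-Thought prompt; stubbed here with a fixed candidate solution.
    return "def add(a, b):\n    return a + b"

def write_tests(task: str, n_samples: int = 5) -> list:
    # Tester agent: samples several independent test suites (the
    # self-consistency part of CoT-SC). One suite contains a deliberately
    # wrong expected value to simulate a hallucinated test case.
    good = [((1, 2), 3), ((0, 0), 0)]
    bad = [((1, 2), 4), ((0, 0), 0)]
    return [good, good, bad, good, good][:n_samples]

def vote_on_tests(suites: list) -> list:
    # Judging/voting unit: keep only (input, expected) pairs that a majority
    # of sampled suites agree on, so hallucinated test data is filtered out
    # before it can send correct code into the repair loop.
    votes = Counter(case for suite in suites for case in suite)
    return [case for case, n in votes.items() if n > len(suites) / 2]

def run_tests(code: str, tests: list, entry: str = "add") -> bool:
    # Executor agent: runs the candidate against the voted tests. A real
    # system would sandbox execution rather than call exec() directly.
    namespace = {}
    exec(code, namespace)
    fn = namespace[entry]
    return all(fn(*args) == expected for args, expected in tests)

if __name__ == "__main__":
    task = "Write add(a, b) that returns the sum of two numbers."
    code = write_code(task)
    trusted = vote_on_tests(write_tests(task))
    print("tests passed" if run_tests(code, trusted) else "enter repair loop")
```

Majority voting over independently sampled test suites is what lets the judging unit discard a hallucinated expected value (here, ((1, 2), 4)) before correct code is wrongly sent into the error-correction process.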
