NCU Institutional Repository (中大機構典藏): Item 987654321/95685


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95685


    Title: Enhancing Code Generation Accuracy through the Addition of LLM Judging Units in a Multi-Agent System
    Authors: 顏維新;Yen, Wei-Hsin
    Contributors: Department of Computer Science and Information Engineering
    Keywords: LLM;Code Generation;ChatGPT;Chain-of-Thought;Multi-Agent Collaboration;LLM Judge
    Date: 2024-07-30
    Issue Date: 2024-10-09 17:09:11 (UTC+8)
    Publisher: National Central University
    Abstract: With the advancement of Large Language Models (LLMs), these models have become pivotal aids in software development. However, LLMs still face numerous challenges in the accuracy and reliability of code generation. This thesis analyzes the correctness of current LLMs in code generation, explores their limitations in practical use, and proposes a new solution to improve the accuracy of generated code.

    This thesis introduces an LLM-based code generation method named JudgeCoder, which combines a multi-agent system with a Chain-of-Thought (CoT) strategy to increase the correctness of generated code. By simulating the division of labor in a small development team, the pipeline separates code writing, test data writing, and test execution, reducing the hallucinations that unclear task division can induce in a single LLM. The method further adopts a strategy based on Chain of Thought with Self-Consistency (CoT-SC) to detect erroneous test data produced by model hallucinations, preventing faulty tests from steering the system into an incorrect repair cycle. In experiments, JudgeCoder achieves state-of-the-art results on the HumanEval and HumanEval-ET benchmarks, confirming that the proposed voting mechanism, combined with appropriate prompting strategies and a sound error-judgment mechanism, effectively improves the accuracy of generated code. These findings validate the practicality of JudgeCoder and offer a direction for future research on LLM-based automatic code generation.
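    The abstract describes the coder/tester/executor split and the self-consistency vote only at a high level. The sketch below is a minimal illustration of how such a pipeline could be wired together, under stated assumptions rather than the thesis's actual implementation: the LLM callable, all prompts, and all function names are hypothetical, and the exact-match vote in judge_tests is a simplification (CoT-SC as published votes over final answers, such as the expected outputs of the sampled tests, not whole completions).

        from collections import Counter
        from typing import Callable, List

        LLM = Callable[[str], str]  # hypothetical interface: prompt in, completion out

        def write_code(llm: LLM, task: str) -> str:
            # Coder agent: drafts a candidate solution.
            return llm("Write a Python function that solves:\n" + task)

        def write_tests(llm: LLM, task: str, n_samples: int = 5) -> List[str]:
            # Tester agent: samples several independent assert-based tests,
            # reasoning step by step (the CoT part of CoT-SC).
            prompt = ("Think step by step, then write one assert statement "
                      "testing a solution to:\n" + task)
            return [llm(prompt) for _ in range(n_samples)]

        def judge_tests(samples: List[str]) -> List[str]:
            # Judging/voting unit: keep only tests that a majority of the
            # samples agree on, discarding likely hallucinated test data
            # (the self-consistency part).
            votes = Counter(samples)
            threshold = len(samples) // 2 + 1
            return [t for t, c in votes.items() if c >= threshold]

        def run_tests(code: str, tests: List[str]) -> bool:
            # Executor agent: runs the vetted tests against the candidate.
            for test in tests:
                try:
                    exec(code + "\n" + test, {})  # no sandboxing; sketch only
                except Exception:
                    return False
            return True

        def judgecoder(llm: LLM, task: str, max_rounds: int = 3) -> str:
            # Only failures of majority-approved tests trigger a repair
            # round, so a hallucinated test cannot start a wrong fix cycle.
            code = write_code(llm, task)
            for _ in range(max_rounds):
                trusted = judge_tests(write_tests(llm, task))
                if run_tests(code, trusted):
                    break
                code = llm("Fix this code so the tests pass:\n" + code
                           + "\n\nTests:\n" + "\n".join(trusted))
            return code

    The design point the abstract argues for is visible in judgecoder: the voting unit sits between test generation and the repair loop, so the cost of a hallucinated test is a discarded sample rather than a misdirected correction round.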
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html  (0 KB, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.
