Thesis 111522153: Detailed Record




Author: 謝禎璟 (Zhen-Jing Xie)    Department: Computer Science and Information Engineering
Thesis Title: AsyncGen: 結合AutoGen與Ray的非同步執行和分散式多代理對話框架
(AsyncGen: An Asynchronous Execution and Distributed Multi-Agent Conversation Framework Integrating AutoGen and Ray)
Files: Full text available in the repository system after 2030-02-01.
Abstract (Chinese): Recently, with the advent of ChatGPT, development of large language models has flourished. Real-world applications, however, are often too complex for a single agent to complete, so researchers have begun exploring how multiple agents can collaborate to solve more complex tasks, giving rise to a variety of multi-agent conversation frameworks. Among them, AutoGen, a framework developed by Microsoft, has attracted wide attention: users can easily customize agents and tasks, and the framework supports Python coroutines for concurrent handling of I/O-intensive tasks.

However, coroutines fundamentally run on a single thread. When a task requires substantial computation, it cannot exploit multi-core CPUs to achieve true parallel execution. For example, large-scale data processing is typically split into batches, yet AutoGen's architecture forces these batches to execute sequentially, which increases processing time and limits its performance and scalability in real-world scenarios.

In addition, because AutoGen's execution model is confined to a single-threaded architecture, the system cannot scale out to multiple machines. Modern distributed systems can spread different tasks across many machines to run concurrently and make full use of available resources, but AutoGen cannot do this, which further limits its ability to handle complex tasks and its range of applications.

AutoGen also lacks fault tolerance. Because the system runs on a single thread, any error or uncaught exception during execution immediately halts the entire system, interrupting the run, undermining stability and availability, and wasting the resources and cost already invested in the LLM services.

To address these challenges, this work proposes AsyncGen, a multi-agent conversation framework improved with a distributed system: it combines AutoGen with the distributed framework Ray and uses Ray's resource scheduling to spread computational tasks across multiple nodes for parallel processing. Experimental results show that the improved framework runs across multiple machines simultaneously, significantly reducing execution time and improving fault tolerance and scalability, demonstrating the potential and value of extending multi-agent frameworks with distributed systems.
Abstract (English): Recently, with the emergence of ChatGPT, the development of large language models has been flourishing. However, real-world applications are often more complex and cannot be handled by a single agent alone. As a result, there has been growing interest in exploring how multiple agents can collaborate to solve more complex tasks, leading to the creation of various multi-agent conversation frameworks. Among them, AutoGen, developed by Microsoft, has garnered significant attention for allowing users to easily customize agents and tasks. It also supports Python coroutines, enabling concurrent handling of I/O-intensive tasks.

However, coroutines inherently run on a single thread. When tasks require substantial computational resources, they cannot leverage the performance of multi-core CPUs to achieve true parallel execution. For instance, in processing large-scale data, tasks are often divided into batches, but AutoGen’s architecture necessitates sequential execution of these tasks, resulting in increased processing time and limiting its performance and scalability in real-world scenarios.
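To make this concrete, the following minimal, framework-agnostic Python sketch (an illustrative assumption, not AsyncGen or AutoGen code) shows both sides of the trade-off: the event loop overlaps I/O-bound coroutines, but CPU-bound batch work inside coroutines still runs one batch after another on a single thread.

```python
# Minimal, framework-agnostic illustration (assumed example, not thesis code):
# coroutines overlap I/O waits, but CPU-heavy batches still serialize on one thread.
import asyncio
import time


async def io_bound_call(i: int) -> int:
    # Stands in for an LLM/API request; the event loop interleaves these waits.
    await asyncio.sleep(1.0)
    return i


def cpu_bound_batch(batch: list[int]) -> int:
    # Stands in for heavy local processing of one data batch; it never yields,
    # so it monopolizes the single thread the event loop runs on.
    return sum(x * x for x in batch)


async def main() -> None:
    t0 = time.perf_counter()
    await asyncio.gather(*(io_bound_call(i) for i in range(4)))
    print(f"4 overlapped I/O calls: {time.perf_counter() - t0:.1f}s")  # ~1 s, not 4 s

    batches = [list(range(2_000_000)) for _ in range(4)]
    t0 = time.perf_counter()
    results = [cpu_bound_batch(b) for b in batches]  # runs strictly one after another
    print(f"4 serialized CPU batches: {time.perf_counter() - t0:.1f}s ({len(results)} results)")


if __name__ == "__main__":
    asyncio.run(main())
```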

Moreover, AutoGen's execution model is constrained by its single-threaded architecture, preventing the system from scaling across multiple computing resources. In modern distributed systems, tasks can be distributed across multiple machines to maximize resource utilization. However, AutoGen is unable to implement such processing, further restricting its ability to handle complex tasks and its range of applications.

Additionally, the AutoGen system lacks fault tolerance. Since it operates on a single thread, any error or uncaught exception during execution will cause the entire system to halt immediately. This results in interruptions, undermining its stability and reliability, and wasting the resources and costs invested in using LLM services.
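A distributed runtime addresses both gaps. The hedged sketch below uses plain Ray (not the thesis's implementation) to show the two properties at issue: remote tasks are scheduled across all worker processes and nodes in the cluster, and a task whose worker dies can be retried instead of taking the whole run down.

```python
# Illustrative Ray-only sketch (assumed example, not thesis code): parallel batch
# processing across cluster nodes with automatic task retries on worker failure.
import ray

ray.init()  # ray.init(address="auto") would attach to an existing multi-node cluster


@ray.remote(max_retries=3)  # Ray re-executes the task if its worker process dies
def process_batch(batch: list[int]) -> int:
    # Stands in for one CPU-heavy batch; each call runs in its own worker process,
    # potentially on a different machine.
    return sum(x * x for x in batch)


batches = [list(range(2_000_000)) for _ in range(8)]
futures = [process_batch.remote(b) for b in batches]  # all batches dispatched at once
print(ray.get(futures))
```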

To address these challenges, this study proposes AsyncGen, an improved multi-agent conversation framework based on distributed systems. By integrating AutoGen with the distributed system framework Ray, AsyncGen utilizes Ray’s resource scheduling to distribute computational tasks across multiple nodes for parallel processing. Experimental results show that the improved framework can operate simultaneously across multiple machines, significantly reducing execution time, enhancing fault tolerance, and improving scalability, demonstrating the potential and value of extending multi-agent frameworks using distributed systems.
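This record does not reproduce AsyncGen's source (the thesis keeps it in Appendix B: Wrapper, user proxy agent, receive, recovery_checkpoint, llm_config), so the sketch below only illustrates the general integration pattern the abstract describes: hosting each AutoGen agent inside a Ray actor so that agents run in separate, restartable processes on whichever node Ray schedules them. Apart from the public Ray and AutoGen APIs, every name here (AgentActor, reply, the model and key values) is hypothetical.

```python
# Hypothetical sketch of the AutoGen-agent-inside-a-Ray-actor pattern; AgentActor,
# reply, and the llm_config values are assumptions, not the thesis's code.
import ray
from autogen import AssistantAgent

ray.init()


@ray.remote(max_restarts=2)  # Ray recreates the actor process if it crashes
class AgentActor:
    """Hosts one AutoGen agent in its own Ray worker process."""

    def __init__(self, name: str, system_message: str, llm_config: dict):
        self._agent = AssistantAgent(
            name=name, system_message=system_message, llm_config=llm_config
        )

    def reply(self, messages: list[dict]):
        # Each wrapped agent generates its reply independently, in parallel with
        # the other actors, on whichever cluster node Ray placed it.
        return self._agent.generate_reply(messages=messages)


llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_API_KEY"}]}  # placeholder
workers = [
    AgentActor.remote(f"worker_{i}", "You summarize one batch of documents.", llm_config)
    for i in range(4)
]
prompt = [{"role": "user", "content": "Summarize your assigned batch."}]
print(ray.get([w.reply.remote(prompt) for w in workers]))
```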
Keywords:
★ Multi-agent Conversation Framework
★ AutoGen
★ Distributed System
★ Ray
Table of Contents:
Abstract (Chinese) - i
Abstract (English) - ii
Acknowledgements - iv
Table of Contents - v
List of Figures - viii
List of Tables - ix
1. Introduction - 1
1.1 Research Background - 1
1.2 Research Motivation - 2
1.3 Problem Definition - 3
1.4 Research Contributions - 3
2. Related Work - 5
2.1 LLM Agents and Multi-Agent Conversation Frameworks - 5
2.2 Improving Multi-Agent Conversation Frameworks with Distributed Systems - 6
3. Proposed Method - 7
3.1 AsyncGen Overview - 7
3.1.1 Role Adjustment of the GroupChatManager - 8
3.2 Integrating AutoGen Agents with Ray Actors - 9
3.2.1 Ray Actors - 9
3.2.2 AutoGen Agents - 10
3.2.3 Integrating Agents and Actors - 10
3.3 AsyncGen Workflow - 12
3.4 Parameter Settings - 13
3.5 Backup and Recovery Mechanism - 14
3.6 Synchronization of Conversation History - 14
3.7 How to Use AsyncGen (an illustrative sketch follows this outline) - 15
3.7.1 Starting the Ray Cluster - 15
3.7.2 Creating Agents - 15
3.7.3 Customizing the Speaking Order - 16
3.7.4 Building the GroupChat and GroupChatManager - 17
3.7.5 Starting the Conversation - 18
4. Experiments and Discussion - 19
4.1 Experimental Design - 19
4.1.1 Execution Environment - 19
4.1.2 Datasets - 19
4.1.3 Experimental Parameter Configuration - 21
4.1.4 Experimental Scenarios - 21
4.2 Experiment 1 - 22
4.2.1 Procedure - 22
4.2.2 Results - 24
4.3 Experiment 2 - 25
4.3.1 Procedure - 25
4.3.2 Results - 25
4.4 Experiment 3 - 26
4.4.1 Procedure - 26
4.4.2 Results - 27
5. Conclusion and Future Work - 28
5.1 Conclusion - 28
5.2 Limitations of AsyncGen and Future Work - 28
5.2.1 Limitations of AsyncGen - 28
5.2.2 Future Work - 29
6. References - 30
Appendix A: Device List - 31
Appendix B: Code - 32
B.1 Wrapper - 32
B.2 User proxy agent - 33
B.3 receive - 34
B.4 recovery_checkpoint - 34
B.5 llm_config - 35
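Section 3.7 of the outline lists five usage steps (start the Ray cluster, create agents, customize the speaking order, build the GroupChat and GroupChatManager, start the conversation). AsyncGen's own API is not shown in this record, so the sketch below assumes it stays close to stock AutoGen 0.2 plus an explicit Ray initialization; every AsyncGen-specific detail should be treated as an assumption.

```python
# Hypothetical end-to-end flow following the five steps of Section 3.7, written
# against stock AutoGen 0.2 and Ray APIs; AsyncGen's actual wrappers (Appendix B)
# are not reproduced in this record.
import ray
from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent

# 3.7.1 Start or join the Ray cluster (address="auto" attaches to a running cluster).
ray.init()

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_API_KEY"}]}  # placeholder

# 3.7.2 Create the agents.
user_proxy = UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)
planner = AssistantAgent(name="planner", llm_config=llm_config)
coder = AssistantAgent(name="coder", llm_config=llm_config)

# 3.7.3 Choose the speaking order (round robin here; Section 3.7.3 covers custom orders).
# 3.7.4 Build the GroupChat and its GroupChatManager.
group_chat = GroupChat(
    agents=[user_proxy, planner, coder],
    messages=[],
    max_round=6,
    speaker_selection_method="round_robin",
)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

# 3.7.5 Start the conversation.
user_proxy.initiate_chat(manager, message="Split the dataset into batches and summarize each batch.")
```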
Advisors: 林家瑜 (Chia-Yu Lin), 郭志義 (Ted T. Kuo)    Approval Date: 2025-01-22