

    Please use this persistent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/97187


    Title: SafePath-Agent: AI Agents with GUI Model-Based Safe Execution Path Planning
    Author: Huang, Zi-Shan (黃子珊)
    Contributor: International Master's Program in Artificial Intelligence
    Keywords: AI Agent; Robotic Process Automation (RPA); Large Language Model (LLM)
    Date: 2025-07-29
    Uploaded: 2025-10-17 10:56:42 (UTC+8)
    Publisher: National Central University
    Abstract: Artificial Intelligence Agents (AI Agents) are one of the most prominent technologies in today's technological landscape. They refer to systems that operate autonomously and perform tasks on behalf of humans by leveraging Large Language Models (LLMs). While many companies are actively developing AI Agents, two major challenges remain in deploying them for automated operations in specific systems.
    First, many high-sensitivity or specialized systems contain data that is not accessible via the internet, making it difficult for LLMs to infer correct actions based on common sense or publicly available knowledge. As a result, they often struggle to plan operational steps effectively. Second, most existing AI Agents rely on trial-and-error approaches to complete tasks. However, in high-stakes environments such as manufacturing or healthcare, such exploratory behavior can lead to dangerous consequences.
    To address these challenges, this study proposes a novel solution: for the first time, it systematically incorporates state machines, a common software engineering technique, into safe execution path planning for AI Agents. Through this design, AI Agents can plan safe and reliable sequences of actions within specific systems.
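The idea of planning over a state machine can be sketched as a graph search: states are GUI screens, edges are permitted actions, and a plan is a path from the current screen to the goal. This is a minimal illustration only; the state and action names (login, dashboard, and so on) are invented for the example and are not taken from the thesis.

```python
from collections import deque

# Hypothetical GUI state machine: keys are screens, values map each
# permitted action to the screen it leads to. Illustrative names only.
TRANSITIONS = {
    "login": {"submit_credentials": "dashboard"},
    "dashboard": {"open_orders": "order_list", "open_settings": "settings"},
    "order_list": {"select_order": "order_detail"},
    "order_detail": {},
    "settings": {"back": "dashboard"},
}

def plan_safe_path(start, goal):
    """BFS over the state machine; returns a list of actions, or None.

    Because every step follows a predefined transition, the resulting
    plan can never leave the modeled (safe) portion of the GUI.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, next_state in TRANSITIONS[state].items():
            if next_state not in visited:
                visited.add(next_state)
                queue.append((next_state, actions + [action]))
    return None  # goal unreachable through safe transitions
```

Unreachable goals return None rather than triggering exploratory behavior, which is the contrast the abstract draws with trial-and-error agents.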
    In the overall architecture, Retrieval-Augmented Generation (RAG) plays a critical role in enhancing domain-specific knowledge. By retrieving relevant document snippets, RAG supplements the AI Agent with expert-level understanding in specialized fields, thereby mitigating the limitations of LLMs in such domains. Meanwhile, the GUI Model provides a structured operational constraint framework that ensures agent actions are confined within predefined safety boundaries.
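The constraint role of the GUI Model described above can be pictured as a guardrail check: an LLM-proposed action is executed only when the model permits it from the current state, and is otherwise rejected outright rather than attempted by trial and error. A minimal sketch, with all names invented for illustration:

```python
def validate_action(transitions, current_state, proposed_action):
    """Return (allowed, next_state); on rejection the state is unchanged.

    The agent may only move along edges that exist in the GUI model,
    so its behavior stays within the predefined safety boundary.
    """
    allowed_actions = transitions.get(current_state, {})
    if proposed_action in allowed_actions:
        return True, allowed_actions[proposed_action]
    return False, current_state

# Hypothetical single-screen model used for the example below.
model = {"dashboard": {"open_orders": "order_list"}}
```

An out-of-model action such as "delete_database" would be rejected before it reaches the target system, regardless of what the LLM proposes.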
    Experiments also compared the performance of various LLMs, showing that advanced models already possess a certain level of capability in planning safe execution processes. Overall, this study establishes the first technical framework for safe autonomous operation by AI Agents through Graphical User Interface (GUI) Models. It achieves an effective balance between intelligence and safety, laying a solid foundation for AI Agent deployment in high-precision and high-reliability systems.
    Appears in Collections: [International Master's Program in Artificial Intelligence] Theses & Dissertations

    Files in This Item:

    File: index.html (0Kb, HTML, 11 views)


    All items in NCUIR are protected by copyright, with all rights reserved.

