

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84081


    Title: A Multi-Agent Reinforcement Learning Framework for Datacenter Traffic Optimization
    Author: 李岱龍;Lee, Dai-Long
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Multi-Agent;Reinforcement Learning;Datacenter;Traffic Control
    Date: 2020-07-30
    Uploaded: 2020-09-02 18:02:07 (UTC+8)
    Publisher: National Central University
    Abstract: Datacenter traffic optimization has been an active research topic for years. Traditional approaches are largely rule-based, crafted from datacenter operators' experience and knowledge of the network environment. However, traffic in a modern datacenter tends to be complex and highly dynamic, which can cause such hand-crafted methods to fail. With the rapid progress of deep reinforcement learning, a number of studies have demonstrated the feasibility of applying it to traffic control. In this research, we propose a multi-agent reinforcement learning framework for datacenter traffic control. The simulation environment is designed around popular datacenter topologies. Using a reward function based on the utility functions commonly employed in network optimization, our agents learn an optimal traffic control policy by maximizing the reward with a deep neural network. Additionally, to improve the agents' exploration efficiency, noise is injected to perturb the parameters of each agent's policy network. Our experimental results show two things: 1) the framework's performance does not degrade when agents are implemented with a simple network architecture, and 2) the framework performs nearly as well as popular traffic control schemes without requiring the assumptions those schemes depend on.
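The two ingredients highlighted in the abstract — a utility-based reward and exploration via noise on the policy network's parameters — can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: the network shape, the noise scale, the proportional-fairness (sum-of-logs) utility, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(rates):
    # Proportional-fairness utility, a common choice in network
    # optimization: U = sum_i log(x_i) over per-flow rates x_i.
    return float(np.sum(np.log(rates)))

class TinyPolicy:
    """A deliberately simple one-hidden-layer policy network."""
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0, 0.1, (n_hidden, n_out))

    def act(self, obs):
        # Forward pass: softmax over candidate rate allocations.
        h = np.tanh(obs @ self.w1)
        logits = h @ self.w2
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def perturbed_copy(self, sigma=0.05):
        # Parameter-space noise: perturb the weights themselves, so the
        # exploratory behavior stays consistent across a whole episode
        # (unlike action-space noise, which jitters every single step).
        p = TinyPolicy.__new__(TinyPolicy)
        p.w1 = self.w1 + rng.normal(0, sigma, self.w1.shape)
        p.w2 = self.w2 + rng.normal(0, sigma, self.w2.shape)
        return p

# Multi-agent setup: one agent per switch, each acting on a local observation.
agents = [TinyPolicy(n_in=4, n_hidden=8, n_out=3) for _ in range(2)]
obs = rng.normal(size=4)
explorers = [a.perturbed_copy() for a in agents]
actions = [p.act(obs) for p in explorers]
# Shared reward from the joint allocation, as in utility-based optimization.
reward = utility(np.concatenate(actions))
```

The perturbed copies are used only for rollouts; a training loop would score them by the utility reward and update the unperturbed weights accordingly.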
    Appears in collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Theses and Dissertations



