    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/92804


    Title: Reinforcement Learning TCP Congestion Control Method Based on Domain Knowledge
    Author: Huang, Bo-Xue (黃柏學)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Deep Learning; Reinforcement Learning; Congestion Control; Computer Network
    Date: 2023-08-14
    Upload time: 2023-10-04 16:11:01 (UTC+8)
    Publisher: National Central University
    Abstract: With the continuous development of network technology, congestion control for the Transmission Control Protocol (TCP) has become an important topic in network performance optimization. In recent years, Reinforcement Learning (RL) has been widely applied to TCP congestion control: by interacting with the environment and exploring an optimal strategy, it performs well on resource-allocation problems. However, due to the black-box nature of deep learning, pure reinforcement learning methods may exhibit unexpected behavior when facing unseen situations. In addition, invoking neural networks too frequently consumes a large amount of computing resources and places a burden on network equipment. Therefore, reinforcement learning solutions must take both computational cost and the handling of unfamiliar environments into account.

    This paper proposes an RL congestion control mechanism for the TCP sender, called "Reinforcement Learning method for Congestion Control based on Shifting Gears (RCSG)", which improves the sender's ability to cope with unseen situations and reduces the invocation frequency of the neural network. The mechanism is driven primarily by a conventional congestion control algorithm and assisted by a neural network, which reduces computational resource consumption. In addition, this paper adopts a specially designed preprocessing flow to reduce unexpected neural-network behavior in unseen situations.
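    To make the "shifting gears" idea concrete, the following is a minimal, hypothetical Python sketch, not the thesis's actual implementation; class and parameter names such as HybridCongestionController and rl_interval are invented for illustration. A Reno-like AIMD loop handles every ACK, clipped and normalized measurements guard against out-of-distribution inputs, and a mocked policy is consulted only once every few round-trip times, which is what bounds the inference cost.

    import random  # stands in for a trained RL policy in this sketch


    class HybridCongestionController:
        """Illustrative hybrid controller: a Reno-like AIMD loop runs on every
        ACK, and a (here mocked) RL policy is consulted only once every
        `rl_interval` RTTs to nudge the congestion window. Names and structure
        are hypothetical and are not taken from the RCSG thesis itself."""

        def __init__(self, rl_interval=4, mss=1460):
            self.cwnd = 10 * mss            # congestion window in bytes
            self.ssthresh = 64 * mss
            self.mss = mss
            self.rl_interval = rl_interval  # consult the policy every N RTTs
            self.rtt_count = 0

        def _preprocess(self, rtt_ms, loss_rate, throughput_mbps):
            # Clip and normalize raw measurements so that values far outside
            # the training distribution cannot push the policy into regions it
            # has never seen.
            rtt = min(max(rtt_ms, 1.0), 1000.0) / 1000.0
            loss = min(max(loss_rate, 0.0), 1.0)
            thr = min(max(throughput_mbps, 0.0), 1000.0) / 1000.0
            return (rtt, loss, thr)

        def _policy(self, obs):
            # Placeholder for a trained network; returns a small multiplicative
            # "gear" adjustment applied to the AIMD-maintained window.
            return random.choice([0.9, 1.0, 1.1])

        def on_ack(self, acked_bytes):
            # Conventional AIMD growth runs on every ACK, with no NN involved.
            if self.cwnd < self.ssthresh:
                self.cwnd += acked_bytes                      # slow start
            else:
                self.cwnd += self.mss * self.mss / self.cwnd  # congestion avoidance

        def on_loss(self):
            # Multiplicative decrease, again purely rule-based.
            self.ssthresh = max(self.cwnd / 2, 2 * self.mss)
            self.cwnd = self.ssthresh

        def on_rtt_end(self, rtt_ms, loss_rate, throughput_mbps):
            # The RL policy runs only every `rl_interval` RTTs, which bounds
            # the computational cost of neural-network inference.
            self.rtt_count += 1
            if self.rtt_count % self.rl_interval == 0:
                obs = self._preprocess(rtt_ms, loss_rate, throughput_mbps)
                self.cwnd = max(2 * self.mss, self.cwnd * self._policy(obs))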

    The experimental results show that, in a stable network environment, RCSG reduces the queuing delay of the Reno and CUBIC algorithms by up to 76.60% and 58.52%, respectively, and that in an unreliable network environment it increases their throughput by up to 62.31% and 24.59%, respectively. Importantly, RCSG maintains stable operation even when encountering scenarios never seen during training. Furthermore, RCSG allows the AI's control frequency to be adjusted flexibly: a higher control frequency gives more precise control at a higher computational cost, while a lower control frequency conserves computational resources at the cost of some reinforcement-learning control capability.
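    Continuing the hypothetical sketch above, the adjustable control frequency would amount to a single parameter choice, trading inference cost against control precision:

    # Tighter control at higher inference cost: consult the policy every RTT.
    precise = HybridCongestionController(rl_interval=1)

    # Conserve computation: let the rule-based AIMD loop handle most RTTs and
    # consult the policy only every 8th RTT.
    frugal = HybridCongestionController(rl_interval=8)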
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses
