Master's/Doctoral Thesis 109523043: Full Metadata Record

DC Field | Value | Language
dc.contributor | 通訊工程學系 (Department of Communication Engineering) | zh_TW
dc.creator | 張佑祺 | zh_TW
dc.creator | YuChi Chang | en_US
dc.date.accessioned | 2022-09-12T07:39:07Z
dc.date.available | 2022-09-12T07:39:07Z
dc.date.issued | 2022
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=109523043
dc.contributor.department | 通訊工程學系 (Department of Communication Engineering) | zh_TW
dc.description | 國立中央大學 (National Central University) | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | With the growth of global Internet traffic and the number of Internet of Things (IoT) devices, modern network applications place ever higher demands on latency, loss rate, and throughput. In recent years, a new transport-layer protocol, QUIC, was developed to meet these demands. QUIC combines the advantages of the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), greatly reducing latency while retaining a high degree of reliability. As a new protocol, however, QUIC's flow control (FC) has not yet been studied thoroughly, which significantly limits performance metrics such as throughput and latency. Recent work designs QUIC flow control on rule-based principles, so it adapts poorly to a wide range of network environments and cannot adjust its behavior in dynamic ones. Meanwhile, many studies in recent years have applied machine learning (ML) to problems in network operation and management. Among ML methods, reinforcement learning (RL) can learn from experience how to interact with an environment without prior knowledge and gradually converge to the best policy; it can therefore learn a correct flow control strategy in changing network environments and achieve good transmission performance. Deep Q-Learning (DQN), a common reinforcement learning model, handles high-dimensional state spaces effectively and mitigates the correlation between training samples, improving the stability of the algorithm. Based on the above, this thesis proposes a QUIC flow control mechanism, FC-DQN. FC-DQN uses a DQN reinforcement learning model to extract end-to-end network features and thereby select an appropriate flow control window, so that it can learn the best flow control policy quickly and stably. Moreover, because FC-DQN adjusts its control rules dynamically according to the environment, it can adapt to dynamic and diverse network scenarios. Experimental results show that FC-DQN outperforms traditional rule-based flow control methods, reducing packet transmission delay while maintaining a low loss rate. | zh_TW
dc.description.abstract | With the increase in global Internet traffic, modern network applications have higher requirements for latency, packet loss rate, and throughput. To meet these needs, a new transport-layer protocol called QUIC has been proposed. It combines the advantages of the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), significantly reducing delay while maintaining a high degree of reliability. As QUIC is a new protocol, research on its flow control has not yet matured, which significantly limits performance metrics such as delay and throughput. Recent research designs QUIC flow control mechanisms on rule-based principles, so they cannot adapt well to a wide range of network environments or adjust their behavior in dynamic environments. In recent years, many studies have applied machine learning to various problems in network operation and management. As a type of machine learning, reinforcement learning can learn how to interact with the environment without prior knowledge and gradually find the best policy. It can thus learn the correct flow control strategy and achieve better transmission performance in a dynamic network environment. Deep Q-Learning (DQN) is a common reinforcement learning model; it effectively handles high-dimensional state spaces and mitigates data correlation, which makes the algorithm more stable. In this paper, we propose a QUIC flow control mechanism called FC-DQN. It selects an appropriate flow control window from end-to-end network characteristics with a DQN reinforcement learning model. Since FC-DQN adjusts its control rules dynamically according to the environment, it can adapt to dynamic and diverse network scenarios. We show that FC-DQN outperforms traditional rule-based QUIC flow control mechanisms and can reduce both delay and packet loss rate. (See the illustrative sketch after this record.) | en_US
dc.subject | 流量控制 (flow control) | zh_TW
dc.subject | flow control | en_US
dc.title | 基於DQN強化學習之自適應QUIC流量控制機制 (Adaptive QUIC Flow Control Mechanism Based on DQN Reinforcement Learning) | zh_TW
dc.language.iso | zh-TW | zh-TW
dc.title | Adaptive QUIC Flow Control Mechanism with Deep Q-Learning | en_US
dc.type | 博碩士論文 (master's/doctoral thesis) | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
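
To make the mechanism described in the abstract concrete, here is a minimal, hypothetical sketch of the FC-DQN idea: a DQN agent that observes end-to-end connection statistics and picks the next flow-control window from a discrete set of candidates. The feature set, candidate window sizes, network shape, reward shaping, and all identifiers (QNet, select_window, train_step) are illustrative assumptions; the record does not specify the thesis's actual state, action, or reward design.

```python
# Hypothetical sketch of an FC-DQN-style agent in PyTorch.
# Features, window sizes, architecture, and reward are assumptions
# for illustration, not the thesis's actual implementation.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4  # assumed features: [RTT, loss rate, throughput, bytes in flight]
WINDOWS = [16_384, 65_536, 262_144, 1_048_576]  # assumed candidate FC windows (bytes)
GAMMA, EPS, LR = 0.99, 0.1, 1e-3

class QNet(nn.Module):
    """Maps an end-to-end state vector to one Q-value per candidate window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, len(WINDOWS)),
        )

    def forward(self, x):
        return self.net(x)

policy, target = QNet(), QNet()
target.load_state_dict(policy.state_dict())  # target net stabilizes bootstrapped targets
opt = torch.optim.Adam(policy.parameters(), lr=LR)
# Experience replay breaks the correlation between consecutive samples.
# Each transition is (state, action_index, reward, next_state); the reward
# might, for example, trade throughput off against delay and loss.
replay = deque(maxlen=10_000)

def select_window(state):
    """Epsilon-greedy choice of the next flow-control window size."""
    if random.random() < EPS:
        return WINDOWS[random.randrange(len(WINDOWS))]
    with torch.no_grad():
        q = policy(torch.tensor(state, dtype=torch.float32))
    return WINDOWS[int(q.argmax())]

def train_step(batch_size=32):
    """One DQN update from a random minibatch of (s, a, r, s') transitions."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(list, zip(*random.sample(replay, batch_size)))
    s, s2 = torch.tensor(s, dtype=torch.float32), torch.tensor(s2, dtype=torch.float32)
    a, r = torch.tensor(a), torch.tensor(r, dtype=torch.float32)
    q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target(s2).max(1).values  # bootstrap from the frozen target net
    loss = nn.functional.smooth_l1_loss(q, r + GAMMA * q_next)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real deployment would feed transitions from the QUIC connection's ACK and flow-control events into the replay buffer and periodically sync the target network with the policy network; both details are omitted from this sketch.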
