    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84064


    Title: Reinforcement Learning for Dynamic Channel Assignment Using Predicted Mobile Traffic
    Authors: 翁柏肯;Wongchamnan, Natpakan
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Reinforcement learning; Channel assignment; Proximal Policy Optimization
    Date: 2020-07-29
    Issue Date: 2020-09-02 18:00:00 (UTC+8)
    Publisher: National Central University
    Abstract: In cellular networks, channel assignment, which selects a channel to allocate to each request in a cell, is a classic problem. Mobile traffic and the number of mobile devices grow every year, but the number of channels is limited. Most prior work applies traditional reinforcement learning to channel assignment without predicted mobile traffic and so does not reflect real situations. In addition, mobile traffic prediction copes well with the dynamic nature of mobile traffic. Hence, we present a reinforcement learning framework for dynamic channel assignment that takes mobile traffic prediction into account and aims to minimize the service blocking probability. In the simulation, we model 144 base stations in Milan, Italy with 1350 channels, use mobile traffic data from November 1, 2013 to December 31, 2013, and train a Proximal Policy Optimization (PPO) model to compare blocking probability and channel utilization against a traditional DCA algorithm and other reinforcement learning models.
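    The abstract describes the framework only at a high level. Below is a minimal, self-contained sketch (in Python, with toy parameters) of how a blocking-probability simulation for channel assignment might be set up. Every name and value here (N_CHANNELS, ARRIVAL_RATE, HOLDING_PROB, greedy_policy) is an illustrative assumption, not taken from the thesis; in the thesis, a PPO agent observing predicted traffic would replace the hand-written policy and act across 144 cells sharing 1350 channels.

        import numpy as np

        # Illustrative, single-cell view of dynamic channel assignment (DCA).
        # Parameters are toy values, not the thesis configuration.
        N_CHANNELS = 10          # channels available to the cell
        ARRIVAL_RATE = 8.0       # mean call arrivals per step (would come from traffic prediction)
        HOLDING_PROB = 0.85      # per-step probability that an active call continues

        rng = np.random.default_rng(0)

        def step(active, accept_policy):
            """Advance the cell one time step; return (active, offered, blocked)."""
            active = rng.binomial(active, HOLDING_PROB)   # some ongoing calls terminate
            offered = rng.poisson(ARRIVAL_RATE)           # new channel requests this step
            blocked = 0
            for _ in range(offered):
                if active < N_CHANNELS and accept_policy(active):
                    active += 1                           # assign a free channel
                else:
                    blocked += 1                          # request is blocked
            return active, offered, blocked

        def greedy_policy(active):
            return True  # always accept while a channel is free

        active, total_offered, total_blocked = 0, 0, 0
        for _ in range(10_000):
            active, offered, blocked = step(active, greedy_policy)
            total_offered += offered
            total_blocked += blocked

        print(f"blocking probability ~ {total_blocked / total_offered:.3f}")
        # An RL agent (e.g. PPO from an off-the-shelf library) would replace greedy_policy,
        # using predicted traffic in its observation to decide assignments across cells.

    The point of the sketch is the evaluation metric: blocking probability is the fraction of offered requests that cannot be assigned a channel, which is the quantity the proposed PPO-based framework is trained to minimize and compared on against the traditional DCA baseline.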
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File          Description    Size    Format
    index.html                   0 KB    HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
