NCU Institutional Repository — theses and dissertations, past exams, journal articles, and research projects: Item 987654321/89710


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89710


    Title: 使用DQN深度學習演算法的O-RAN無線電單元頻寬使用率之負載平衡機制;Deep Q-Learning Based Load Balancing Mechanism for Efficient Bandwidth Usage on Radio Unit in O-RAN Architecture
    Authors: 周國斌;Chou, Kuo-Pin
    Contributors: Department of Communication Engineering
    Keywords: Load Balancing;Bandwidth Resource Allocation;Reinforcement Learning;O-RAN;Edge Computing;Industrial IoT;Mobile Computing
    Date: 2022-09-20
    Issue Date: 2022-10-04 11:53:43 (UTC+8)
    Publisher: National Central University
    Abstract: In today's Industrial Internet of Things (IIoT) deployments, the rapid growth in the number of IIoT devices drives network traffic demand up year after year, making the network increasingly difficult to manage. The O-RAN architecture can effectively address this problem: compared with the traditional Radio Access Network (RAN), O-RAN's modular design lets network vendors tailor the network architecture to different deployment environments. However, an O-RAN deployment interconnects several base stations and network devices; its traffic is heavy, variable, and complex, and each network device has different Quality of Service (QoS) requirements. Traditional algorithms therefore struggle to control such networks, resulting in uneven bandwidth utilization across base stations. This research applies the Deep Q-Network (DQN) model from reinforcement learning to this problem. Because a DQN dynamically and continuously adjusts its output through interaction with the environment, it can pair each IIoT device, according to its QoS requirements, with an appropriate O-RAN base station, thereby controlling the bandwidth utilization of every base station and achieving load balancing while still satisfying QoS requirements. The experimental results show that, under identical connection conditions, the proposed DQN-based O-RAN architecture distributes resource-block utilization across base stations more evenly than traditional algorithms oriented toward energy saving or bandwidth saving, preventing the latency caused by high traffic. As for device service rate, when the number of devices increases, the proposed method also allocates resource blocks effectively, so the service rate degrades more gently than under the other methods.
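    The mechanism the abstract describes — an agent that assigns arriving IIoT devices to base stations so that resource-block utilization stays balanced — can be sketched in miniature. The thesis uses a DQN with a neural-network approximator; the sketch below swaps in tabular Q-learning over a coarsely discretized utilization state to stay dependency-free. All concrete values (`NUM_RUS`, `CAPACITY`, device demands, the reward shape) are illustrative assumptions, not parameters from the thesis.

    ```python
    # Sketch: assign arriving devices to radio units (RUs) so that resource-block
    # utilization stays balanced. Tabular Q-learning stands in for the thesis's
    # DQN; states are utilization levels bucketed to 10% steps.
    import random
    from statistics import pstdev

    NUM_RUS = 3          # radio units / base stations (assumed)
    CAPACITY = 100       # resource blocks per RU (assumed)
    EPISODES = 300
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

    def state_of(loads):
        # Coarse state: each RU's utilization, bucketed to 10% steps.
        return tuple(int(10 * l / CAPACITY) for l in loads)

    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)

    random.seed(0)
    for _ in range(EPISODES):
        loads = [0] * NUM_RUS
        for _ in range(20):                      # 20 devices arrive per episode
            demand = random.choice([5, 10, 15])  # resource blocks requested
            s = state_of(loads)
            if random.random() < EPS:            # epsilon-greedy exploration
                a = random.randrange(NUM_RUS)
            else:
                a = max(range(NUM_RUS), key=lambda i: q(s, i))
            loads[a] += demand
            # Reward: penalize load imbalance; large penalty for a QoS
            # (capacity) violation. This reward shape is an assumption.
            reward = -pstdev(loads)
            if loads[a] > CAPACITY:
                reward -= 50
            s2 = state_of(loads)
            best_next = max(q(s2, i) for i in range(NUM_RUS))
            Q[(s, a)] = q(s, a) + ALPHA * (reward + GAMMA * best_next - q(s, a))

    # Greedy rollout with the learned table: ideally each RU ends up
    # carrying a similar share of the total load.
    loads = [0] * NUM_RUS
    for _ in range(20):
        s = state_of(loads)
        a = max(range(NUM_RUS), key=lambda i: q(s, i))
        loads[a] += 10
    print(loads)
    ```

    The imbalance penalty (`-pstdev(loads)`) plays the role of the thesis's load-balancing objective, while the capacity penalty stands in for the QoS constraint; a real DQN would replace the lookup table with a neural network so the continuous, high-dimensional O-RAN state need not be discretized.
    
    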
    Appears in Collections:[Graduate Institute of Communication Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

