Full metadata record for thesis 108523021

DC field  value  language
dc.contributor  通訊工程學系  zh_TW
dc.creator  許皓惟  zh_TW
dc.creator  Hao-Wei Hsu  en_US
dc.date.accessioned  2022-08-24T07:39:07Z
dc.date.available  2022-08-24T07:39:07Z
dc.date.issued  2022
dc.identifier.uri  http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=108523021
dc.contributor.department  通訊工程學系  zh_TW
dc.description  國立中央大學  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  由於製造商正在開發用於性能和應用集成的 5G NR 基站 (BS),當前的解決方案主要基於傳統的技術規範,發展智能無線資源管理技術可以優化當前小蜂窩系統的傳輸性能。為了在 O-RAN Near-RT RIC 中構建用於鏈路自適應的 xApp,我們使用提供的 API 來完成傳輸觀察和參數自適應等基本功能,然後採用深度強化學習。智能代理 (Agent) 以收到的訊息作為狀態 (State),可以動態選擇最佳的鏈路適配參數,實現高效傳輸。我們將此功能打包到 xApp 中並在現實的 O-RAN 系統上進行測試,觀察到了不錯的結果。另一方面,在超可靠低時延通信 (URLLC) 應用中,我們嘗試使用 5G ns-3 模擬 IIoT 工廠場景;有別於傳統的上行方式,Grant-free (GF) 可以在減少延遲下,同時保持一定的可靠性。在不同的傳輸條件下,我們開發了不同的 RL 方法來動態選擇 GF,最終在數值結果中也可以看到滿意率的良好趨勢。  zh_TW
dc.description.abstract  As manufacturers are developing 5G NR base stations (BS) for performance and application integration, current solutions are mainly based on conventional technical specifications. Developing intelligent radio resource management technology could optimize the transmission performance of current small cell systems. To build an xApp for link adaptation in the O-RAN Near-RT RIC, we use the provided API to complete essential functions such as transmission observation and parameter adaptation, and then adopt deep reinforcement learning. Using the indication report as the state, the smart agent can dynamically select the best link adaptation parameters to achieve high-efficiency transmission. We pack the agent into an xApp and test it on a realistic O-RAN system, with encouraging results. On the other hand, for the ultra-reliable low-latency communication (URLLC) application, we use the 5G ns-3 simulator to model an IIoT factory scenario. Unlike the traditional grant-based uplink method, grant-free (GF) transmission can reduce delay while maintaining a certain level of reliability. Under various transmission conditions, we develop different reinforcement learning (RL) methods to select the transmission mode dynamically. Finally, a promising trend in the satisfaction rate can also be seen in the numerical results.  en_US
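The abstract describes an RL agent that dynamically selects a transmission mode (grant-based vs. grant-free). The record gives no implementation details, so as a minimal, purely illustrative sketch (the state quantization, reward, and toy environment below are hypothetical, not taken from the thesis), a tabular Q-learning loop for uplink mode selection might look like this:

```python
import random

# Hypothetical sketch: tabular Q-learning for uplink mode selection.
# States quantize channel quality (e.g. CQI buckets); actions are
# 0 = grant-based, 1 = grant-free. All names and rewards are
# illustrative assumptions, not taken from the thesis.

N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state: int) -> int:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=q[state].__getitem__)

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard Q-learning temporal-difference update."""
    best_next = max(q[next_state])
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])

def step(state: int, action: int) -> tuple[float, int]:
    """Toy environment: grant-free pays off in good channel states (2, 3),
    grant-based is safer in poor states (0, 1); next state is random."""
    good = state >= 2
    reward = 1.0 if (action == 1) == good else -1.0
    return reward, random.randrange(N_STATES)

random.seed(0)
state = 0
for _ in range(5000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    update(state, action, reward, next_state)
    state = next_state

# Greedy policy after training: one preferred mode per channel state.
policy = [max(range(N_ACTIONS), key=q[s].__getitem__) for s in range(N_STATES)]
print(policy)
```

In this toy setting the learned greedy policy ends up preferring grant-based mode in poor channel states and grant-free mode in good ones; the thesis's actual agents operate on O-RAN indication reports and ns-3 simulation feedback instead of this synthetic reward.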
dc.subject  O-RAN  zh_TW
dc.subject  xApps  zh_TW
dc.subject  Near-RT RIC  zh_TW
dc.subject  Link adaptation  zh_TW
dc.subject  Grant free  zh_TW
dc.title  Reinforcement Learning-Based Link Adaptation and Grant-Free Mode Selection for O-RAN Systems  en_US
dc.language.iso  en_US  en_US
dc.type  博碩士論文  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
