Graduate Thesis 106583603: Detailed Record




Author  Ibrahim Abdullah Musleh Althamary (雅布督)    Department  Communication Engineering
Thesis Title  Scalable Spectrum and Connectivity Management in Satellite-Aided Vehicular Networks
Related Theses
★ Mosaic-Based Low-Distortion Physical Circuit Layout Obfuscation
★ Seamless Handover from Wireless LANs to Mobile Networks with Multipath TCP
★ Budget-Constrained Heterogeneous Sub-Band Allocation in Cognitive Radio Networks
★ Performance Evaluation of Downlink QoS Scheduling in Multi-Antenna Transmission Environments
★ Integrated Congestion and Path Control for Multipath TCP
★ Opportunistic Scheduling for Multicast over Wireless Networks
★ Low-Complexity Proportional-Fair Scheduling Design for Multi-User MIMO Systems
★ UE and MIMO Mode Selection with Hybrid Antenna Allocation in LTE Heterogeneous Networks
★ Heterogeneous Spectrum Allocation Based on Budget-Limited Pricing Auctions
★ Scheduling-Based Clustering for MTC Device ID-Sharing Scenarios
★ Efficient Two-Way Vertical Handover with Multipath TCP
★ Congestion and Scheduling Control for Out-of-Order Transmission in Multipath TCP
★ Group Handover for Gateway Relocation in Mobile Networks
★ Auction-Based Mobile Data Offloading with Green-Energy Small Cells
★ Channel Prediction and Proportional-Fair Scheduling Design for High-Speed Rail Environments
★ A Hybrid IoT Traffic Generator for Mobile Network Performance Evaluation
File  Full text available in the repository after 2026-08-31.
Abstract (Chinese)  This dissertation studies resource allocation for New Radio (NR) Vehicle-to-Everything (V2X) communications in 5G and beyond networks. The rapid evolution of these networks, particularly from 3GPP Release 15 onward, urgently calls for robust solutions that support critical safety messages and advanced applications demanding high data rates, high reliability, spectrum efficiency, and low latency. The dissertation proposes a scalable multi-connectivity spectrum management approach based on multi-agent reinforcement learning (MARL), extending the multi-agent actor-critic (MAAC) model for NR V2X communications. In complex, dynamic environments, the approach improves throughput, successful delivery rate, and spectrum efficiency while reducing latency and interference.

The proposed algorithms adapt to large-scale systems by accommodating varying numbers of agents, reflecting the nature of V2X communication. Transformer mechanisms help the model generalize to different environments, improving validation capacity and total utility. Partitioning the network into smaller collaboration regions reduces the state space, and cooperative and transfer learning demonstrate scalability, minimizing decision delays and handling dynamic changes efficiently. These solutions manage the complexity of V2X environments, minimize communication overhead and decision-making complexity, and exploit shared observations to improve practicality in large-scale deployments.

Leveraging the Transformer architecture, the method meets scalability requirements by training centrally on a small-scale map and testing, both centrally and in a decentralized manner, on different large-scale maps with varying numbers of agents and roadside units (RSUs). A gated recurrent unit (GRU) layer is further introduced to support inter-agent communication and optimize system performance. This combination improves learning efficiency and decision-making, enabling collaborative learning and policy sharing among agents.

The first research question addresses the design of a scalable V2X resource allocation architecture that optimizes system throughput, spectrum efficiency, and packet delivery reliability in shared and dynamic environments. To this end, we develop a partially observable networked Markov decision process (MDP) model for distributed multi-connectivity management. The model accounts for overall system throughput, utility, spectrum efficiency, and packet delivery reliability of the V2I, V2S, and V2V links under resource constraints. Evaluations on the Simulation of Urban MObility (SUMO) platform show that the model significantly improves throughput, successful delivery rate, and spectrum efficiency while reducing latency and interference.

The second research question explores advanced optimization and state estimation techniques to ensure robust resource management and high performance in dynamic V2X environments. This includes using partial Lagrange multipliers to transform complex constrained optimization problems into reward-based systems, and implementing transformer-based state prediction with an additional prediction layer. These techniques accurately forecast the full system state, improving scalability and state estimation in complex and dynamic scenarios. The proposed methods efficiently manage computational complexity and memory requirements, ensuring robust, high-performance resource management.

The third research question examines how enhanced agent collaboration mechanisms and the integration of satellite and vehicular networks improve network reliability and extend connectivity in V2X systems. Novel collaboration mechanisms are introduced that let agents share experiences and policies more effectively, improving learning efficiency and decision quality. Integrating satellite and vehicular networks further improves reliability and extends connectivity, enabling seamless communication across multiple regions. The results show that these mechanisms significantly improve the performance and reliability of V2X systems.

In summary, this dissertation provides a comprehensive solution to the resource allocation challenges in V2X communications. By integrating satellite networks and leveraging advanced Transformer architectures and multi-agent reinforcement learning, the proposed methods significantly improve the performance and reliability of V2X systems. This research lays a foundation for the future development of V2X technology, contributing to more intelligent and efficient transportation systems.
Abstract (English)  This dissertation addresses the significant challenges of resource allocation in New Radio (NR) Vehicle-to-Everything (V2X) communications within 5G and beyond networks. The rapid evolution of these networks, highlighted in 3GPP Release 15 and beyond, necessitates robust solutions to support critical safety messages and advanced applications that demand high data rates, reliability, spectrum efficiency, and low latency. The proposed solution is a scalable multi-connectivity spectrum management approach based on Multi-Agent Reinforcement Learning (MARL), extending the Multi-Agent Actor-Critic (MAAC) model for NR V2X communications. This approach enhances throughput, successful delivery rates, and spectrum efficiency while reducing latency and interference in complex, dynamic environments.

Our proposed algorithms adapt to large-scale systems by accommodating varying numbers of agents, reflecting the nature of V2X communication. Transformer mechanisms help the models generalize to varying environments, improving validation capacity and total utility. Clustering the network into smaller collaboration regions reduces the state space and demonstrates scalability: decision delays are minimized, and dynamic changes are handled efficiently through cooperative and transfer learning. These solutions manage the complexities of V2X environments, minimizing communication overhead and decision-making complexity while utilizing shared observations to enhance practicality in large-scale deployments.
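As an illustration of the clustering idea, the sketch below assigns each vehicle to its nearest RSU so that learning and observation sharing stay local to a small region. The 2-D positions, Euclidean metric, and function names are assumptions for illustration, not the dissertation's exact procedure.

```python
# Hedged sketch: nearest-RSU region assignment to shrink each agent's
# effective state space. Positions and the distance metric are assumed.
import numpy as np

def cluster_by_rsu(vehicle_pos: np.ndarray, rsu_pos: np.ndarray) -> np.ndarray:
    """vehicle_pos: (V, 2), rsu_pos: (R, 2) -> region index per vehicle."""
    # Pairwise Euclidean distances (V, R), then nearest RSU per vehicle.
    d = np.linalg.norm(vehicle_pos[:, None, :] - rsu_pos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Each resulting region becomes a small cooperative MARL problem.
regions = cluster_by_rsu(np.random.rand(20, 2) * 1000, np.random.rand(4, 2) * 1000)
```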

Leveraging transformer architectures, this approach satisfies scalability requirements by training on a small-scale map in a centralized manner and testing on different large-scale maps with varying numbers of agents and Roadside Units (RSUs) in both centralized and decentralized manners. A Gated Recurrent Unit (GRU) layer is also introduced to support agent communication and optimize system performance. This innovative combination enhances learning efficiency and decision-making, enabling collaborative learning and policy sharing among agents.
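A minimal sketch (assuming PyTorch) of how a transformer encoder over per-agent observations can be combined with a GRU layer, in the spirit of the architecture described above; all layer sizes, names, and the exact wiring are illustrative assumptions rather than the dissertation's implementation.

```python
# Hedged sketch: transformer-plus-GRU actor head. Dimensions are illustrative.
import torch
import torch.nn as nn

class TransformerGRUActor(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        # Self-attention is agnostic to sequence length, so the encoder
        # handles varying numbers of agents, supporting the scalability claim.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # The GRU passes information along the agent sequence, and its hidden
        # state can be carried across decision steps as a simple memory.
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.policy = nn.Linear(d_model, n_actions)

    def forward(self, obs, hidden=None):
        # obs: (batch, n_agents, obs_dim) -- shared observations of one region
        x = self.encoder(self.embed(obs))    # (batch, n_agents, d_model)
        x, hidden = self.gru(x, hidden)      # recurrent refinement
        logits = self.policy(x)              # per-agent action logits
        return torch.distributions.Categorical(logits=logits), hidden

# Usage: the same network works for 8 agents here and, say, 20 elsewhere.
actor = TransformerGRUActor(obs_dim=10, n_actions=4)
dist, h = actor(torch.randn(1, 8, 10))
actions = dist.sample()                      # one action per agent
```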

The first research question addresses the design of a scalable V2X resource allocation architecture that optimizes system throughput, spectrum efficiency, and packet delivery reliability in shared and dynamic environments. This is achieved by developing a Partially Observable Networked Markov Decision Process (MDP) model for distributed multi-connectivity management. The model considers overall system throughput, utility, spectrum efficiency, and packet delivery reliability of V2I, V2S, and V2V links under resource limitations. Evaluations using the Simulation of Urban MObility (SUMO) platform demonstrated that this model significantly improves throughput, successful delivery rates, and spectrum efficiency while reducing latency and interference.
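To make the reward structure concrete, here is a toy per-agent utility in the spirit of that model: throughput aggregated over the V2I, V2S, and V2V links, with a penalty for failed packet delivery. The Shannon-rate link model, the weights, and all names are assumptions for illustration, not the dissertation's exact objective.

```python
# Hedged sketch of a per-agent utility over multi-connectivity links.
import math

def link_rate(bandwidth_hz: float, sinr: float) -> float:
    """Achievable rate (bit/s) of one link under a Shannon-capacity model."""
    return bandwidth_hz * math.log2(1.0 + sinr)

def agent_utility(v2i_sinr, v2s_sinr, v2v_sinr, bw_hz=1e6,
                  delivered=True, w_rate=1.0, w_fail=5.0):
    """Weighted throughput over the three link types, minus a penalty
    when a safety packet misses its delivery deadline."""
    throughput = sum(link_rate(bw_hz, s) for s in (v2i_sinr, v2s_sinr, v2v_sinr))
    penalty = 0.0 if delivered else w_fail
    return w_rate * throughput / 1e6 - penalty   # normalized to Mbit/s

print(agent_utility(10.0, 3.0, 6.0, delivered=True))
```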

The second research question explores advanced optimization and state estimation techniques to ensure robust resource management and high performance in dynamic V2X environments. This involves utilizing Partial Lagrange multipliers to transform complex optimization problems into reward-based systems and implementing transformer-based state prediction with an additional prediction layer. These techniques accurately forecast the full system state, improving scalability and state estimation in complex and dynamic scenarios. The proposed methods efficiently manage computational complexity and memory requirements, ensuring robust resource management and high performance.
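A small sketch of the Lagrangian-relaxation idea described above: a latency constraint is folded into the reward through a multiplier that is adapted by dual ascent. The constraint choice, step size, and naming are assumptions; the dissertation's actual formulation may relax different constraints.

```python
# Hedged sketch: constrained objective -> reward via a partial Lagrangian.
class LagrangianReward:
    def __init__(self, latency_budget_ms: float, lr: float = 0.01):
        self.budget = latency_budget_ms
        self.lmbda = 0.0   # Lagrange multiplier for the latency constraint
        self.lr = lr

    def reward(self, throughput: float, latency_ms: float) -> float:
        # "max throughput s.t. latency <= budget" becomes the unconstrained
        # reward  throughput - lambda * (constraint violation).
        violation = max(latency_ms - self.budget, 0.0)
        return throughput - self.lmbda * violation

    def update_multiplier(self, latency_ms: float) -> None:
        # Dual ascent: grow lambda while the constraint is violated,
        # shrink it (never below zero) once it is satisfied.
        self.lmbda = max(0.0, self.lmbda + self.lr * (latency_ms - self.budget))

shaper = LagrangianReward(latency_budget_ms=10.0)
r = shaper.reward(throughput=50.0, latency_ms=12.0)
shaper.update_multiplier(latency_ms=12.0)
```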

The third research question examines how enhanced agent collaboration mechanisms and the integration of satellite and vehicular networks can improve network reliability and expand connectivity in V2X systems. Innovative collaboration mechanisms are introduced, enabling agents to share experiences and policies more effectively, thus enhancing both learning efficiency and decision-making quality. Additionally, integrating satellite and vehicular networks improves network reliability and expands connectivity, facilitating seamless communication across diverse areas. The results show that these mechanisms significantly enhance the performance and reliability of V2X systems.
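One way such policy sharing could look in code, as a hedged sketch: each agent softly blends its policy parameters toward the mean of its neighbors' (e.g., agents served by the same RSU cluster). The blending coefficient and the neighbor structure are assumptions, not the dissertation's exact mechanism.

```python
# Hedged sketch: soft policy-parameter sharing among neighboring agents.
import torch

@torch.no_grad()
def share_policies(agents, neighbors, tau: float = 0.1):
    """agents: list of nn.Module policies; neighbors[i]: indices of agents
    adjacent to agent i. Snapshots ensure all blends use pre-update values."""
    snapshots = [[p.clone() for p in a.parameters()] for a in agents]
    for i, agent in enumerate(agents):
        if not neighbors[i]:
            continue
        for k, p in enumerate(agent.parameters()):
            avg = torch.stack([snapshots[j][k] for j in neighbors[i]]).mean(0)
            p.mul_(1.0 - tau).add_(tau * avg)   # p <- (1 - tau) p + tau * avg
```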

This dissertation provides a comprehensive solution to resource allocation challenges in V2X communications. By integrating satellite networks, utilizing advanced transformer architectures, and employing multi-agent reinforcement learning, the proposed approach significantly enhances the performance and reliability of V2X systems. This research lays the groundwork for future advancements in V2X technology, contributing to the development of more intelligent and efficient transportation systems.
Keywords (Chinese) ★ Multi-Connectivity
★ 5G
★ Transformer
★ Satellite Network
★ NR V2X
Keywords (English) ★ Multi-Connectivity
★ 5G
★ 6G
★ Sidelink
★ NR V2X
★ Satellite Network
★ Spectrum Management
★ Multi-Agent Reinforcement Learning
★ Machine Learning
★ Transformer
★ Clustering
★ Self-imitation Learning
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
List of Figures
List of Tables
Acronym List
1 Introduction
1.1 Background and Motivation
1.2 Overview of V2X Communication
1.2.1 V2X Communication Modes
1.3 Motivation and Objectives
1.3.1 Research Questions
1.3.2 Objectives
1.4 Contribution
1.5 Thesis Structure
2 Literature Review
2.1 Multi-Agent Reinforcement Learning
2.1.1 Fundamentals of MARL
2.2 Advancements and Applications of MARL in V2X Networks
2.3 V2X Network Resource Management
2.4 Categories of Requirements to Support Enhanced V2X Scenarios
2.5 V2X Topics of Interest and Challenges
3 System Model
3.1 Channel Models for V2X Communication Systems
3.1.1 V2I Channels
3.1.2 V2V Channels
3.1.3 V2S Channels
3.2 V2X Communication System
4 Optimized Connectivity Management for Satellite-Aided Vehicular Networks
4.1 Problem Formulation
4.2 Multi-Agent Reinforcement Learning for V2X
4.2.1 POMDP Model
4.2.2 Multi-Agent Extension and State Estimation
4.3 Numerical Results
4.3.1 Simulation Setup
4.3.2 Performance Evaluation
4.4 Conclusions
5 Scalable Resource Management with Clustering and Transformer Mechanisms
5.1 Contributions
5.2 Problem Formulation
5.2.1 V2X Spectrum Management Problem
5.3 Networked Multi-Agent MDP for V2X
5.3.1 Partially Observable Networked Multi-Agent MDP Model
5.3.2 Networked Multi-Agent Extension of A2C
5.4 Clustering-Based Scalability with Transformer State Estimation
5.4.1 State Estimation
5.4.2 Enhanced Scalability with Transformer-Based Estimation
5.4.3 Experience and Policy Sharing
5.4.4 Decentralized RSU Clustering
5.4.5 Training and Execution Algorithms
5.4.6 Computational Complexity Analysis
6 Experimental Results
6.1 Simulation Results
6.2 Simulation Parameters
6.3 Simulation Results and Analysis
6.3.1 Impact of Observation Sharing Among Agents
6.3.2 Reward Scaling
6.3.3 Action Ratios
6.3.4 Communication Overhead Analysis
6.3.5 Mode Utilization
6.3.6 Effective Throughput
6.3.7 Overall Successful Transmission Rate
6.3.8 Clustering Performance
6.4 Discussion
6.5 Conclusion
7 Conclusion and Future Work
7.1 Conclusion
7.2 Future Work
Publications
Bibliography
Advisor  Prof. Chih-Wei Huang (黃志煒)    Approval Date  2024-08-19