References
[1] 3GPP. V2X Services based on NR; User Equipment (UE) radio transmission and reception; (Release 16). Technical report, 3GPP, March 2021.
[2] 3GPP. 3GPP TR 38.821: Solutions for NR to support non-terrestrial networks (NTN). Technical report, 3GPP, April 2023.
[3] 3GPP. 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Study on LTE-based V2X Services (Release 14). Technical report, 3GPP, June 2016.
[4] Xiaohu You, Cheng-Xiang Wang, Jie Huang, Xiqi Gao, Zaichen Zhang, Mao Wang, Yongming Huang, Chuan Zhang, Yanxiang Jiang, Jiaheng Wang, et al. Towards 6G wireless communication networks: Vision, enabling technologies, and new paradigm shifts. Science China Information Sciences, 64(1):1–74, 2021.
[5] 3GPP. Study on evaluation methodology of new Vehicle-to-Everything (V2X) use cases for LTE and NR (Release 15). Technical report, 3GPP, June 2019.
[6] Mario H Castañeda Garcia, Alejandro Molina-Galan, Mate Boban, Javier Gozalvez, Baldomero Coll-Perales, Taylan Şahin, and Apostolos Kousaridas. A Tutorial on 5G NR V2X Communications. IEEE Communications Surveys & Tutorials, 2021.
[7] Di Zhou, Min Sheng, Jiandong Li, and Zhu Han. Aerospace Integrated Networks Innovation for Empowering 6G: A Survey and Future Challenges. IEEE Communications Surveys & Tutorials, 2023.
[8] Boyu Deng, Chunxiao Jiang, Jian Yan, Ning Ge, Song Guo, and Shanghong Zhao. Joint multigroup precoding and resource allocation in integrated terrestrial-satellite networks. IEEE Transactions on Vehicular Technology, 68(8):8075–8090, 2019.
[9] Denise Joanitah Birabwa, Daniel Ramotsoela, and Neco Ventura. Service-Aware User Association and Resource Allocation in Integrated Terrestrial and Non-Terrestrial Networks: A Genetic Algorithm Approach. IEEE Access, 10:104337–104357, 2022.
[10] Mehdi Harounabadi, Dariush Mohammad Soleymani, Shubhangi Bhadauria, Martin Leyh, and Elke Roth-Mandutz. V2X in 3GPP standardization: NR sidelink in Release-16 and beyond. IEEE Communications Standards Magazine, 5(1):12–21, 2021.
[11] Yusuke Koda, Ryogo Okura, and Hiroshi Harada. Toward 3GPP Sidelink-Based Millimeter Wave Wireless Personal Area Network for Out-of-Coverage Scenarios. IEEE Internet of Things Journal, 2024.
[12] Xinran Zhang, Mugen Peng, Shi Yan, and Yaohua Sun. Deep-reinforcement-learning-based mode selection and resource allocation for cellular V2X communications. IEEE Internet of Things Journal, 7(7):6380–6391, 2019.
[13] Peng Qin, Yang Fu, Jing Zhang, Suiyan Geng, Jiayan Liu, and Xiongwen Zhao. DRL-Based Resource Allocation and Trajectory Planning for NOMA-Enabled Multi-UAV Collaborative Caching 6G Network. IEEE Transactions on Vehicular Technology, 2024.
[14] Sawsan AbdulRahman, Ouns Bouachir, Safa Otoum, and Azzam Mourad. CRAS-FL: Clustered resource-aware scheme for federated learning in vehicular networks. Vehicular Communications, page 100769, 2024.
[15] Qingxu Fu, Tenghai Qiu, Jianqiang Yi, Zhiqiang Pu, and Xiaolin Ai. Self-Clustering Hierarchical Multi-Agent Reinforcement Learning with Extensible Cooperation Graph. arXiv preprint arXiv:2403.18056, 2024.
[16] Mete Yavuz and Ömer Cihan Kıvanç. Optimization of a Cluster-Based Energy Management System using Deep Reinforcement Learning without Affecting Prosumer Comfort: V2X Technologies and Peer-to-Peer Energy Trading. IEEE Access, 2024.
[17] Tianxu Li, Kun Zhu, Nguyen Cong Luong, Dusit Niyato, Qihui Wu, Yang Zhang, and Bing Chen. Applications of Multi-Agent Reinforcement Learning in Future Internet: A Comprehensive Survey. IEEE Communications Surveys & Tutorials, 2022.
[18] Le Liang, Hao Ye, and Geoffrey Ye Li. Spectrum sharing in vehicular networks based on multi-agent reinforcement learning. IEEE Journal on Selected Areas in Communications, 37(10):2282–2292, 2019.
[19] Xuan Zhang, Hengxi Zhang, Huaze Tang, Le Liang, Ling Cheng, Xinlei Chen, Wenbo Ding, and Xiao-Ping Zhang. A Scalable Mean-Field MARL Framework for Multi-Objective V2X Resource Allocation. IEEE Transactions on Intelligent Vehicles, 2024.
[20] Thanh Thi Nguyen, Ngoc Duy Nguyen, and Saeid Nahavandi. Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications. IEEE Transactions on Cybernetics, 50(9):3826–3839, 2020.
[21] Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Basar. Fully decentralized multi-agent reinforcement learning with networked agents. In International Conference on Machine Learning, pages 5872–5881. PMLR, 2018.
[22] Lei Lei, Yue Tan, Kan Zheng, Shiwen Liu, Kuan Zhang, and Xuemin Shen. Deep reinforcement learning for autonomous internet of things: Model, applications and challenges. IEEE Communications Surveys & Tutorials, 22(3):1722–1760, 2020.
[23] Tong Wu, Pan Zhou, Binghui Wang, Ang Li, Xueming Tang, Zichuan Xu, Kai Chen, and Xiaofeng Ding. Joint Traffic Control and Multi-Channel Reassignment for Core Backbone Network in SDN-IoT: A Multi-Agent Deep Reinforcement Learning Approach. IEEE Transactions on Network Science and Engineering, 2020.
[24] Amal Feriani and Ekram Hossain. Single and Multi-Agent Deep Reinforcement Learning for AI-Enabled Wireless Networks: A Tutorial. IEEE Communications Surveys & Tutorials, 2021.
[25] Rose E Wang, Michael Everett, and Jonathan P How. R-MADDPG for partially observable environments and limited communication. arXiv preprint arXiv:2002.06684, 2020.
[26] Zhen Gao, Lei Yang, and Yu Dai. Large-scale Cooperative Task Offloading and Resource Allocation in Heterogeneous MEC Systems via Multi-Agent Reinforcement Learning. IEEE Internet of Things Journal, 2023.
[27] M Pranav, AK Raghavendra, Rudresh S Patil, Kushal V Palankar, and D Annapurna. Enhancing 5G Cellular Connectivity: A Comprehensive Analysis of NR Sidelink with Relay. In 2024 IEEE 9th International Conference for Convergence in Technology (I2CT), pages 1–6. IEEE, 2024.
[28] Muhammad Usman, Marwa Qaraqe, Muhammad Rizwan Asghar, Anteneh A Gebremariam, Imran Shafique Ansari, Fabrizio Granelli, and Qammer H Abbasi. A business and legislative perspective of V2X and mobility applications in 5G networks. IEEE Access, 8:67426–67435, 2020.
[29] Bach Long Nguyen, Duy T Ngo, and Hai L Vu. Vehicle Communications for Infotainment Applications. In Handbook of Real-Time Computing, pages 705–722. Springer, 2022.
[30] Kai Lin, Chensi Li, Pasquale Pace, and Giancarlo Fortino. Multi-level cluster-based satellite-terrestrial integrated communication in Internet of Vehicles. Computer Communications, 149:44–50, 2020.
[31] James Meijers, Panagiotis Michalopoulos, Shashank Motepalli, Gengrui Zhang, Shiquan Zhang, Andreas Veneris, and Hans-Arno Jacobsen. Blockchain for V2X: Applications and architectures. IEEE Open Journal of Vehicular Technology, 3:193–209, 2022.
[32] Muhammad Shahid Mastoi, Shengxian Zhuang, Hafiz Mudassir Munir, Malik Haris, Mannan Hassan, Mohammed Alqarni, and Basem Alamri. A study of charging-dispatch strategies and vehicle-to-grid technologies for electric vehicles in distribution networks. Energy Reports, 9:1777–1806, 2023.
[33] Abdelkader Mekrache, Abbas Bradai, Emmanuel Moulay, and Samir Dawaliby. Deep reinforcement learning techniques for vehicular networks: Recent advances and future trends towards 6G. Vehicular Communications, 33:100398, 2022.
[34] Haibo Zhou, Wenchao Xu, Jiacheng Chen, and Wei Wang. Evolutionary V2X technologies toward the Internet of Vehicles: Challenges and opportunities. Proceedings of the IEEE, 108(2):308–323, 2020.
[35] Azim Eskandarian, Chaoxian Wu, and Chuanyang Sun. Research advances and challenges of autonomous and connected ground vehicles. IEEE Transactions on Intelligent Transportation Systems, 22(2):683–711, 2019.
[36] P Rajalakshmi et al. Towards 6G V2X Sidelink: Survey of Resource Allocation-Mathematical Formulations, Challenges, and Proposed Solutions. IEEE Open Journal of Vehicular Technology, 2024.
[37] Shuangwu Chen, Zhen Yao, Xiaofeng Jiang, Jian Yang, and Lajos Hanzo. Multi-Agent Deep Reinforcement Learning Based Cooperative Edge Caching for Ultra-Dense Next-Generation Networks. IEEE Transactions on Communications, 2020.
[38] Le Liang, Hao Ye, and Geoffrey Ye Li. Towards Intelligent Vehicular Networks: A Machine Learning Framework. IEEE Internet of Things Journal, 2018.
[39] Hao Ye, Le Liang, Geoffrey Ye Li, JoonBeom Kim, Lu Lu, and May Wu. Machine learning for vehicular networks. arXiv preprint arXiv:1712.07143, 2017.
[40] Wang Tong, Azhar Hussain, Wang Xi Bo, and Sabita Maharjan. Artificial Intelligence for Vehicle-to-Everything: A Survey. IEEE Access, 7:10823–10843, 2019.
[41] Xiangyu Zhao, Long Xia, Liang Zhang, Zhuoye Ding, Dawei Yin, and Jiliang Tang. Deep reinforcement learning for page-wise recommendations. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 95–103, 2018.
[42] Helin Yang, Xianzhong Xie, and Michel Kadoch. Intelligent Resource Management Based on Reinforcement Learning for Ultra-Reliable and Low-Latency IoV Communication Networks. IEEE Transactions on Vehicular Technology, 68(5):4157–4169, 2019.
[43] Frans A Oliehoek. Decentralized POMDPs. In Reinforcement Learning, pages 471–503. Springer, 2012.
[44] Matthew Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. In 2015 AAAI Fall Symposium Series, 2015.
[45] Kang Tan, Duncan Bremner, Julien Le Kernec, Lei Zhang, and Muhammad Imran. Machine learning in vehicular networking: An overview. Digital Communications and Networks, 8(1):18–24, 2022.
[46] Furong Chai, Qi Zhang, Haipeng Yao, Xiangjun Xin, Fu Wang, Minrui Xu, Zehui Xiong, and Dusit Niyato. Multi-Agent DDPG based Resource Allocation in NOMA-enabled Satellite IoT. IEEE Transactions on Communications, 2024.
[47] Amjad Iqbal, Mau-Luen Tham, Yi Jie Wong, Gabriel Wainer, Yong Xu Zhu, Tasos Dagiuklas, et al. Empowering Non-Terrestrial Networks with Artificial Intelligence: A Survey. IEEE Access, 2023.
[48] Rana Muhammad Sohaib, Oluwakayode Onireti, Yusuf Sambo, Rafiq Swash, and Muhammad Imran. Energy Efficient Resource Allocation Framework Based on Dynamic Meta-Transfer Learning for V2X Communications. IEEE Transactions on Network and Service Management, 2024.
[49] Dingbang Liu, Fenghui Ren, Jun Yan, Guoxin Su, Wen Gu, and Shohei Kato. Scaling up multi-agent reinforcement learning: An extensive survey on scalability issues. IEEE Access, 2024.
[50] Afshin Oroojlooy and Davood Hajinezhad. A review of cooperative multi-agent deep reinforcement learning. Applied Intelligence, 53(11):13677–13722, 2023.
[51] Junya Ikemoto and Toshimitsu Ushio. Deep reinforcement learning under signal temporal logic constraints using Lagrangian relaxation. IEEE Access, 10:114814–114828, 2022.
[52] Wenzhe Li, Hao Luo, Zichuan Lin, Chongjie Zhang, Zongqing Lu, and Deheng Ye. A survey on transformers in reinforcement learning. arXiv preprint arXiv:2301.03044, 2023.
[53] Liangshun Wu, Junsuo Qu, Shilin Li, Cong Zhang, Jianbo Du, Xiang Sun, and Jiehan Zhou. Attention-Augmented MADDPG in NOMA-Based Vehicular Mobile Edge Computational Offloading. IEEE Internet of Things Journal, 2024.
[54] Yuhang Wang, Ying He, F Richard Yu, Qiuzhen Lin, and Victor CM Leung. Efficient resource allocation in multi-UAV assisted vehicular networks with security constraint and attention mechanism. IEEE Transactions on Wireless Communications, 2022.
[55] Md Noor-A-Rahim, Zilong Liu, Haeyoung Lee, GG Md Nawaz Ali, Dirk Pesch, and Pei Xiao. A survey on resource allocation in vehicular networks. IEEE Transactions on Intelligent Transportation Systems, 2020.
[56] Junhui Zhao, Fajin Hu, Jiahang Li, and Yiwen Nie. Multi-agent deep reinforcement learning based resource management in heterogeneous V2X networks. Digital Communications and Networks, 2023.
[57] Muhammad Ikram Ashraf, Mehdi Bennis, Cristina Perfecto, and Walid Saad. Dynamic proximity-aware resource allocation in vehicle-to-vehicle (V2V) communications. In 2016 IEEE Globecom Workshops (GC Wkshps), pages 1–6. IEEE, 2016.
[58] Qian Long, Zihan Zhou, Abhinav Gupta, Fei Fang, Yi Wu, and Xiaolong Wang. Evolutionary Population Curriculum for Scaling Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2003.10423, 2020.
[59] Ping Xiang, Hangguan Shan, Miao Wang, Zhiyu Xiang, and Zhenguo Zhu. Multi-agent RL enables decentralized spectrum access in vehicular networks. IEEE Transactions on Vehicular Technology, 70(10):10750–10762, 2021.
[60] Yuxin Ji, Yu Wang, Haitao Zhao, Guan Gui, Haris Gacanin, Hikmet Sari, and Fumiyuki Adachi. Multi-Agent Reinforcement Learning Resources Allocation Method Using Dueling Double Deep Q-Network in Vehicular Networks. IEEE Transactions on Vehicular Technology, 2023.
[61] Hao Ye and Geoffrey Ye Li. Deep reinforcement learning for resource allocation in V2V communications. In 2018 IEEE International Conference on Communications (ICC), pages 1–6. IEEE, 2018.
[62] Hao Zhou, Xiaoyan Wang, Zhi Liu, Yusheng Ji, and Shigeki Yamada. Resource Allocation for SVC Streaming Over Cooperative Vehicular Networks. IEEE Transactions on Vehicular Technology, 67(9):7924–7936, 2018.
[63] Zhou Su, Yilong Hui, Qichao Xu, Tingting Yang, Jianyi Liu, and Yunjian Jia. An Edge Caching Scheme to Distribute Content in Vehicular Networks. IEEE Transactions on Vehicular Technology, 2018.
[64] Hao Ye, Geoffrey Ye Li, and Biing-Hwang Fred Juang. Deep reinforcement learning based resource allocation for V2V communications. IEEE Transactions on Vehicular Technology, 68(4):3163–3173, 2019.
[65] Yi-Han Xu, Cheng-Cheng Yang, Min Hua, and Wen Zhou. Deep Deterministic Policy Gradient (DDPG)-Based Resource Allocation Scheme for NOMA Vehicular Communications. IEEE Access, 8:18797–18807, 2020.
[66] Yueyun Chen, Zhuo Zeng, Taohua Chen, Zushen Liu, and Alan Yang. A Capacity Improving Scheme in Multi-RSUs Deployed V2I. CMC-Computers, Materials & Continua, 60(2):835–853, 2019.
[67] Vladimir R de Lima and Marcello LR de Campos. Fully Distributed Multi-Agent Processing Strategy Applied to Vehicular Networks. Vehicular Communications, page 100806, 2024.
[68] Xiaoqiang Wang, Liangjun Ke, Zhimin Qiao, and Xinghua Chai. Large-scale traffic signal control using a novel multiagent reinforcement learning. IEEE Transactions on Cybernetics, 51(1):174–187, 2020.
[69] Tianshu Chu, Jie Wang, Lara Codecà, and Zhaojian Li. Multi-agent deep reinforcement learning for large-scale traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 2019.
[70] 3GPP. Technical Specification Group Services and System Aspects; Service requirements for next generation new services and markets; Stage 1. Technical Report 22.261, 3rd Generation Partnership Project (3GPP), 2024. Version 18.13.0.
[71] Qualcomm Technologies, Inc. 5G Advanced Release 19 Presentation, 2024. Accessed: 2024-06-05.
[72] Cheolkyu Shin, Emad Farag, Hyunseok Ryu, Miao Zhou, and Younsun Kim. Vehicle-to-Everything (V2X) evolution from 4G to 5G in 3GPP: Focusing on resource allocation aspects. IEEE Access, 11:18689–18703, 2023.
[73] Anuja Nair and Sudeep Tanwar. Resource allocation in V2X communication: State-of-the-art and research challenges. Physical Communication, page 102351, 2024.
[74] Marko Angjelichinoski, Kasper Fløe Trillingsgaard, and Petar Popovski. A statistical learning approach to ultra-reliable low latency communication. arXiv preprint arXiv:1809.05515, 2018.
[75] Zhang Liwen, Faizan Qamar, Mahrukh Liaqat, Mhd Nour Hindia, and Khairul Akram Zainol Ariffin. Towards Efficient 6G IoT Networks: A Perspective on Resource Optimization Strategies, Challenges, and Future Directions. IEEE Access, 2024.
[76] Zhipeng Liu, Yinhui Han, Jianwei Fan, Lin Zhang, and Yunzhi Lin. Joint Optimization of Spectrum and Energy Efficiency Considering the C-V2X Security: A Deep Reinforcement Learning Approach. arXiv preprint arXiv:2003.10620, 2020.
[77] Marie-Theres Suer, Christoph Thein, Hugues Tchouankem, and Lars Wolf. Multi-Connectivity as an Enabler for Reliable Low Latency Communications—An Overview. IEEE Communications Surveys & Tutorials, 22(1):156–169, 2020.
[78] Apostolos Kousaridas, Chan Zhou, David Martín-Sacristán, David Garcia-Roger, Jose F Monserrat, and Sandra Roger. Multi-connectivity management for 5G V2X communication. In 2019 IEEE 30th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pages 1–7. IEEE, 2019.
[79] Alexander Rabitsch, Karl-Johan Grinnemo, Anna Brunstrom, Henrik Abrahamsson, Fehmi Ben Abdesslem, Stefan Alfredsson, and Bengt Ahlgren. Utilizing Multi-Connectivity to Reduce Latency and Enhance Availability for Vehicle to Infrastructure Communication. IEEE Transactions on Mobile Computing, 2020.
[80] Itamar Arel, Cong Liu, T Urbanik, and AG Kohls. Reinforcement learning-based multi-agent system for network traffic signal control. IET Intelligent Transport Systems, 4(2):128–135, 2010.
[81] Qingmiao Zhang, Lidong Zhu, Yanyan Chen, and Shan Jiang. Energy-efficient traffic offloading for RSMA-based hybrid satellite terrestrial networks with deep reinforcement learning. China Communications, 21(2):49–58, 2024.
[82] Jun Wang, Daquan Feng, Shengli Zhang, Jianhua Tang, and Tony QS Quek. Computation offloading for mobile edge computing enabled vehicular networks. IEEE Access, 7:62624–62632, 2019.
[83] Yinlin Ren, Xingyu Chen, Song Guo, Shaoyong Guo, and Ao Xiong. Blockchain-Based VEC Network Trust Management: A DRL Algorithm for Vehicular Service Offloading and Migration. IEEE Transactions on Vehicular Technology, 70(8):8148–8160, 2021.
[84] Lin Yao, Ailun Chen, Jing Deng, Jianbang Wang, and Guowei Wu. A cooperative caching scheme based on mobility prediction in vehicular content centric networks. IEEE Transactions on Vehicular Technology, 67(6):5435–5444, 2018.
[85] Degan Zhang, Hui Ge, Ting Zhang, Yu-Ya Cui, Xiaohuan Liu, and Guoqiang Mao. New Multi-Hop Clustering Algorithm for Vehicular Ad Hoc Networks. IEEE Transactions on Intelligent Transportation Systems, 20(4):1517–1530, April 2019.
[86] Mengying Ren, Jun Zhang, Lyes Khoukhi, Houda Labiod, and Véronique Vèque. A review of clustering algorithms in VANETs. Annals of Telecommunications, pages 1–23, 2021.
[87] Kang Tan, Julien Le Kernec, Muhammad Imran, and Duncan Bremner. Clustering algorithm in vehicular ad-hoc networks: A brief summary. In 2019 UK/China Emerging Technologies (UCET), pages 1–5. IEEE, 2019.
[88] Jiahui Li, Geng Sun, Qingqing Wu, Dusit Niyato, Jiawen Kang, Abbas Jamalipour, and Victor Leung. Collaborative Ground-Space Communications via Evolutionary Multi-objective Deep Reinforcement Learning. arXiv preprint arXiv:2404.07450, 2024.
[89] Bo Guo, Liwei Deng, Ruisheng Wang, Wenchao Guo, Alex Hay-Man Ng, and Wenfeng Bai. MCTNet: Multiscale Cross-attention based Transformer Network for Semantic Segmentation of Large-scale Point Cloud. IEEE Transactions on Geoscience and Remote Sensing, 2023.
[90] Ahlem Masmoudi, Kais Mnif, and Faouzi Zarai. A survey on radio resource allocation for V2X communication. Wireless Communications and Mobile Computing, 2019, 2019.
[91] Waleed Ahsan, Wenqiang Yi, Zhijin Qin, Yuanwei Liu, and Arumugam Nallanathan. Resource allocation in uplink NOMA-IoT networks: A reinforcement-learning approach. IEEE Transactions on Wireless Communications, 20(8):5083–5098, 2021.
[92] Gagangeet Singh Aujla, Rajat Chaudhary, Neeraj Kumar, Joel JPC Rodrigues, and Alexey Vinel. Data offloading in 5G-enabled software-defined vehicular networks: A Stackelberg-game-based approach. IEEE Communications Magazine, 55(8):100–108, 2017.
[93] Ke Zhang, Yuming Mao, Supeng Leng, Yejun He, and Yan Zhang. Mobile-edge computing for vehicular networks: A promising network paradigm with predictive off-loading. IEEE Vehicular Technology Magazine, 12(2):36–44, 2017.
[94] Thanh Thi Nguyen, Ngoc Duy Nguyen, and Saeid Nahavandi. Deep Reinforcement Learning for Multi-Agent Systems: A Review of Challenges, Solutions and Applications. arXiv preprint arXiv:1812.11794, 2018.
[95] Zhaojun Lu, Gang Qu, and Zhenglin Liu. A Survey on Recent Advances in Vehicular Network Security, Trust, and Privacy. IEEE Transactions on Intelligent Transportation Systems, 20(2), 2019.
[96] Marco Giordani, Takayuki Shimizu, Andrea Zanella, Takamasa Higuchi, Onur Altintas, and Michele Zorzi. Path loss models for V2V mmWave communication: Performance evaluation and open challenges. In 2019 IEEE 2nd Connected and Automated Vehicles Symposium (CAVS), pages 1–5. IEEE, 2019.
[97] 3GPP. Study on channel model for frequencies from 0.5 to 100 GHz (Release 16). Technical report, 3GPP, December 2019.
[98] ITU-R. Propagation data and prediction methods required for the design of Earth-space telecommunication systems. Recommendation ITU-R P.618-13, 2017.
[99] Po-Yen Chen, Yu-Heng Zheng, Ibrahim Althamary, Jann-Long Chern, and Chih-Wei Huang. Multi-Agent Deep Reinforcement Learning for Spectrum Management in V2X with Social Roles. In IEEE Global Communications Conference (GLOBECOM), Kuala Lumpur, Malaysia, December 2023.
[100] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[101] Hao Lin, Yue He, Fanzhang Li, Quan Liu, Bangjun Wang, and Fei Zhu. Taking complementary advantages: Improving exploration via double self-imitation learning in procedurally-generated environments. Expert Systems with Applications, 238:122145, 2024.
[102] Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. In International Conference on Machine Learning, pages 3878–3887. PMLR, 2018.
[103] Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip HS Torr, Pushmeet Kohli, and Shimon Whiteson. Stabilising experience replay for deep multi-agent reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1146–1155. JMLR.org, 2017.
[104] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623–2631, 2019.
[105] North American Aerospace Defense Command (NORAD). Starlink Satellite Elements. http://celestrak.com/NORAD/elements/, 2023. Accessed: August 2023.
[106] Seyedali Mirjalili. Evolutionary algorithms and neural networks: Theory and applications, volume 780. Springer, 2018.
[107] Charles W Chase Jr. Innovations in business forecasting. The Journal of Business Forecasting, 33(3):22, 2014.
[108] Antonio Frangioni, Bernard Gendron, and Enrico Gorgone. Dynamic smoothness parameter for fast gradient methods. Optimization Letters, 12:43–53, 2018.
[109] Ibrahim Althamary, Jun-Yong Lin, and Chih-Wei Huang. Spectrum Management with Congestion Avoidance for V2X Based on Multi-Agent Reinforcement Learning. In 2020 IEEE Globecom Workshops (GC Wkshps), pages 1–6. IEEE, 2020.
[110] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016.
[111] Zheng Li and Caili Guo. Multi-agent deep reinforcement learning based spectrum allocation for D2D underlay communications. IEEE Transactions on Vehicular Technology, 69(2):1828–1840, 2019.
[112] Xiaonan Liu and Yansha Deng. Learning-based Prediction, Rendering and Association Optimization for MEC-enabled Wireless Virtual Reality (VR) Network. IEEE Transactions on Wireless Communications, 2021.
[113] Pablo Alvarez Lopez, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic traffic simulation using SUMO. In The 21st IEEE International Conference on Intelligent Transportation Systems. IEEE, 2018.
[114] Lixia Xue, Yuchen Yang, and Decun Dong. Roadside infrastructure planning scheme for the urban vehicular networks. Transportation Research Procedia, 25:1380–1396, 2017.
[115] Anran Du, Yicheng Shen, and Lewis Tseng. CarML: distributed machine learning in vehicular clouds. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, pages 1–3, 2020.