References
[1] Hanbyul Seo, Ki-Dong Lee, Shinpei Yasukawa, Ying Peng, and Philippe Sartori. LTE evolution for vehicle-to-everything services. IEEE Communications Magazine, 54(6):22–28, 2016.
[2] Carlos Renato Storck and Fátima Duarte-Figueiredo. A survey of 5G technology evolution, standards, and infrastructure associated with vehicle-to-everything communications by internet of vehicles. IEEE Access, 8:117593–117614, 2020.
[3] F. Lisi, G. Losquadro, A. Tortorelli, A. Ornatelli, and M. Donsante. Multi-connectivity in 5G terrestrial-satellite networks: the 5G-ALLSTAR solution, 2020.
[4] Subramanya Chandrashekar, Andreas Maeder, Cinzia Sartori, Thomas Höhne, Benny Vejlgaard, and Devaki Chandramouli. 5G multi-RAT multi-connectivity architecture. In 2016 IEEE International Conference on Communications Workshops (ICC), pages 180–186, 2016.
[5] Hao Ye and Geoffrey Ye Li. Deep reinforcement learning based distributed resource allocation for V2V broadcasting. In 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), pages 440–445, 2018.
[6] Hao Ye, Geoffrey Ye Li, and Biing-Hwang Fred Juang. Deep reinforcement learning based resource allocation for V2V communications. IEEE Transactions on Vehicular Technology, 68(4):3163–3173, 2019.
[7] Le Liang, Hao Ye, and Geoffrey Ye Li. Toward intelligent vehicular networks: A machine learning framework. IEEE Internet of Things Journal, 6(1):124–135, 2019.
[8] Liang Wang, Hao Ye, Le Liang, and Geoffrey Ye Li. Learn to compress CSI and allocate resources in vehicular networks. IEEE Transactions on Communications, 68(6):3640–3653, 2020.
[9] Le Liang, Hao Ye, and Geoffrey Ye Li. Spectrum sharing in vehicular networks based on multi-agent reinforcement learning. IEEE Journal on Selected Areas in Communications, 37(10):2282–2292, 2019.
[10] Helin Yang, Xianzhong Xie, and Michel Kadoch. Intelligent resource management based on reinforcement learning for ultra-reliable and low-latency IoV communication networks. IEEE Transactions on Vehicular Technology, 68(5):4157–4169, 2019.
[11] Min Zhao, Yifei Wei, Mei Song, and Guo Da. Power control for D2D communication using multi-agent reinforcement learning. In 2018 IEEE/CIC International Conference on Communications in China (ICCC), pages 563–567, 2018.
[12] Zheng Li, Caili Guo, and Yidi Xuan. A multi-agent deep reinforcement learning based spectrum allocation framework for D2D communications. In 2019 IEEE Global Communications Conference (GLOBECOM), pages 1–6, 2019.
[13] Dohyun Kwon and Joongheon Kim. Multi-agent deep reinforcement learning for cooperative connected vehicles. In 2019 IEEE Global Communications Conference (GLOBECOM), pages 1–6, 2019.
[14] Rose E. Wang, Michael Everett, and Jonathan P. How. R-MADDPG for partially observable environments and limited communication, 2020.
[15] Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P. How, and John Vian. Deep decentralized multi-task multi-agent reinforcement learning under partial observability, 2017.
[16] Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate to solve riddles with deep distributed recurrent Q-networks, 2016.
[17] Olga Galinina, Alexander Pyattaev, Sergey Andreev, Mischa Dohler, and Yevgeni Koucheryavy. 5G multi-RAT LTE-WiFi ultra-dense small cells: Performance dynamics, architecture, and trends. IEEE Journal on Selected Areas in Communications, 33(6):1224–1240, 2015.
[18] Peng Wang, Jiaxin Zhang, Xing Zhang, Zhi Yan, Barry G. Evans, and Wenbo Wang. Convergence of satellite and terrestrial networks: A comprehensive survey. IEEE Access, 8:5550–5588, 2020.
[19] Gaofeng Cui, Yating Long, Lexi Xu, and Weidong Wang. Joint offloading and resource allocation for satellite assisted vehicle-to-vehicle communication. IEEE Systems Journal, pages 1–12, 2020.
[20] Janne Janhunen, Johanna Ketonen, Ari Hulkkonen, Juha Ylitalo, Antti Roivainen, and Markku Juntti. Satellite uplink transmission with terrestrial network interference. In 2015 IEEE Global Communications Conference (GLOBECOM), pages 1–6, 2015.
[21] Vincent Deslandes, Jérôme Tronc, and André-Luc Beylot. Analysis of interference issues in integrated satellite and terrestrial mobile systems. In 2010 5th Advanced Satellite Multimedia Systems Conference and the 11th Signal Processing for Space Communications Workshop, pages 256–261, 2010.
[22] Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. A brief survey of deep reinforcement learning. CoRR, abs/1708.05866, 2017.
[23] Jianqing Fan, Zhaoran Wang, Yuchen Xie, and Zhuoran Yang. A theoretical analysis of deep Q-learning. In Alexandre M. Bayen, Ali Jadbabaie, George Pappas, Pablo A. Parrilo, Benjamin Recht, Claire Tomlin, and Melanie Zeilinger, editors, Proceedings of the 2nd Conference on Learning for Dynamics and Control, volume 120 of Proceedings of Machine Learning Research, pages 486–489, The Cloud, 10–11 Jun 2020. PMLR.
[24] Vijay R. Konda and John N. Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems, pages 1008–1014, 2000.
[25] 3GPP. Technical Specification Group Radio Access Network; Solutions for NR to support non-terrestrial networks (NTN), December 2019.
[26] C. Kourogiorgas, D. Tarchi, A. Ugolini, P. D. Arapoglou, A. D. Panagopoulos, G. Colavolpe, and A. Vanelli-Coralli. System capacity evaluation of DVB-S2X based medium earth orbit satellite network operating at Ka band. In 2016 8th Advanced Satellite Multimedia Systems Conference and the 14th Signal Processing for Space Communications Workshop (ASMS/SPSC), pages 1–8, 2016.