References
[1] W. Feng, N. Zhang, S. Li, S. Lin, R. Ning, S. Yang, and Y. Gao, “Latency minimization of reverse offloading in vehicular edge computing,” IEEE Transactions on Vehicular Technology, vol. 71, no. 5, pp. 5343–5357, 2022.
[2] A. T. Jawad, R. Maaloul, and L. Chaari, “A multi-agent reinforcement learning-based approach for UAV-assisted vehicle-to-everything network,” in 2023 9th International Conference on Control, Decision and Information Technologies (CoDIT), 2023, pp. 123–129.
[3] W. Zhan, C. Luo, J. Wang, C. Wang, G. Min, H. Duan, and Q. Zhu, “Deep-reinforcement-learning-based offloading scheduling for vehicular edge computing,” IEEE Internet of Things Journal, vol. 7, no. 6, pp. 5449–5465, 2020.
[4] D. Pliatsios, P. Sarigiannidis, T. D. Lagkas, V. Argyriou, A.-A. A. Boulogeorgos, and P. Baziana, “Joint wireless resource and computation offloading optimization for energy efficient Internet of Vehicles,” IEEE Transactions on Green Communications and Networking, vol. 6, no. 3, pp. 1468–1480, 2022.
[5] M. Liwang, J. Wang, Z. Gao, X. Du, and M. Guizani, “Game theory based opportunistic computation offloading in cloud-enabled IoV,” IEEE Access, vol. 7, pp. 32551–32561, 2019.
[6] Q. Wu, H. Ge, H. Liu, Q. Fan, Z. Li, and Z. Wang, “A task offloading scheme in vehicular fog and cloud computing system,” IEEE Access, vol. 8, pp. 1173–1184, 2020.
[7] C. Wu, Z. Huang, and Y. Zou, “Delay constrained hybrid task offloading of Internet of Vehicle: A deep reinforcement learning method,” IEEE Access, vol. 10, pp. 102778–102788, 2022.
[8] K. Wang, X. Wang, X. Liu, and A. Jolfaei, “Task offloading strategy based on reinforcement learning computing in edge computing architecture of Internet of Vehicles,” IEEE Access, vol. 8, pp. 173779–173789, 2020.
[9] C. Chen, L. Chen, L. Liu, S. He, X. Yuan, D. Lan, and Z. Chen, “Delay-optimized V2V-based computation offloading in urban vehicular edge computing and networks,” IEEE Access, vol. 8, pp. 18863–18873, 2020.
[10] X. Dai, Z. Xiao, H. Jiang, H. Chen, G. Min, S. Dustdar, and J. Cao, “A learning-based approach for vehicle-to-vehicle computation offloading,” IEEE Internet of Things Journal, vol. 10, no. 8, pp. 7244–7258, 2023.
[11] R.-H. Hwang, M. M. Islam, M. A. Tanvir, M. S. Hossain, and Y.-D. Lin, “Communication and computation offloading for 5G V2X: Modeling and optimization,” in GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 2020, pp. 1–6.
[12] H. Ge, X. Song, S. Ma, L. Liu, S. Li, X. Cheng, T. Zhou, and H. Feng, “Task offloading algorithm in edge computing based on DQN,” in 2022 4th International Conference on Natural Language Processing (ICNLP), 2022, pp. 482–488.
[13] S. B. Engidayehu, T. Mahboob, and M. Y. Chung, “Deep reinforcement learning-based task offloading and resource allocation in MEC-enabled wireless networks,” in 2022 27th Asia Pacific Conference on Communications (APCC), 2022, pp. 226–230.
[14] J. Chen, Z. Wu, L. Wu, Y. Xia, Y. Wang, L. Xiong, and C. Shi, “Hybrid decision based multi-agent deep reinforcement learning for task offloading in collaborative edge-cloud computing,” in 2022 IEEE 24th Int Conf on High Performance Computing & Communications; 8th Int Conf on Data Science & Systems; 20th Int Conf on Smart City; 8th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys). IEEE, 2022, pp. 228–235.
[15] P.-D. Nguyen and L. B. Le, “Joint computation offloading, SFC placement, and resource allocation for multi-site MEC systems,” in 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020, pp. 1–6.
[16] Y. Li, C. Yang, M. Deng, X. Tang, and W. Li, “A dynamic resource optimization scheme for MEC task offloading based on policy gradient,” in 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), vol. 6, 2022, pp. 342–345.
[17] P. Teymoori and A. Boukerche, “Dynamic multi-user computation offloading for mobile edge computing using game theory and deep reinforcement learning,” in ICC 2022 - IEEE International Conference on Communications, 2022, pp. 1930–1935.
[18] Y. Wang, P. Lang, D. Tian, J. Zhou, X. Duan, Y. Cao, and D. Zhao, “A game-based computation offloading method in vehicular multiaccess edge computing networks,” IEEE Internet of Things Journal, vol. 7, no. 6, pp. 4987–4996, 2020.
[19] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808, 2015.
[20] M. Dai, Z. Su, Q. Xu, and N. Zhang, “Vehicle assisted computing offloading for unmanned aerial vehicles in smart city,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 3, pp. 1932–1944, 2021.
[21] H. Wang, Z. Lin, K. Guo, and T. Lv, “Computation offloading based on game theory in MEC-assisted V2X networks,” in 2021 IEEE International Conference on Communications Workshops (ICC Workshops), 2021, pp. 1–6.
[23] G. Jain, A. Kumar, and S. A. Bhat, “Recent developments of game theory and reinforcement learning approaches: A systematic review,” IEEE Access, vol. 12, pp. 9999–10011, 2024.
[24] T. Rappaport, Wireless Communications: Principles and Practice, 2nd ed. USA: Prentice Hall PTR, 2001.
[25] S. Jošilo and G. Dán, “Decentralized algorithm for randomized task allocation in fog computing systems,” IEEE/ACM Transactions on Networking, vol. 27, no. 1, pp. 85–97, 2019.
[26] Z. Xiao, X. Dai, H. Jiang, D. Wang, H. Chen, L. Yang, and F. Zeng, “Vehicular task offloading via heat-aware MEC cooperation using game-theoretic method,” IEEE Internet of Things Journal, vol. 7, no. 3, pp. 2038–2052, 2020.
[27] H. Wang, Z. Lin, K. Guo, and T. Lv, “Energy and delay minimization based on game theory in MEC-assisted vehicular networks,” in 2021 IEEE International Conference on Communications Workshops (ICC Workshops), 2021, pp. 1–6.
[28] X. Hu, S. Xu, L. Wang, Y. Wang, Z. Liu, L. Xu, Y. Li, and W. Wang, “A joint power and bandwidth allocation method based on deep reinforcement learning for V2V communications in 5G,” China Communications, vol. 18, no. 7, pp. 25–35, 2021.
[29] D. Monderer and L. S. Shapley, “Potential games,” Games and Economic Behavior, vol. 14, no. 1, pp. 124–143, 1996. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0899825696900445
[30] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17. Red Hook, NY, USA: Curran Associates Inc., 2017, pp. 6382–6393.
[31] L. Wang, K. Wang, C. Pan, W. Xu, N. Aslam, and L. Hanzo, “Multi-agent deep reinforcement learning-based trajectory planning for multi-UAV assisted mobile edge computing,” IEEE Transactions on Cognitive Communications and Networking, vol. 7, no. 1, pp. 73–84, 2021.
[32] J. J. Ackermann, V. Gabler, T. Osa, and M. Sugiyama, “Reducing overestimation bias in multi-agent domains using double centralized critics,” arXiv preprint arXiv:1910.01465, 2019. [Online]. Available: https://api.semanticscholar.org/CorpusID:203642167
[33] T. Rappaport, Wireless Communications: Principles and Practice. Cambridge University Press, 2024. [Online]. Available: https://books.google.com.tw/books?id=X3r5EAAAQBAJ