References
[1] J. Park, P. Popovski, and O. Simeone, “Minimizing latency to support VR social interactions over wireless cellular systems via bandwidth allocation,” IEEE Wireless Communications Letters, vol. 7, no. 5, pp. 776–779, Oct. 2018.
[2] L. Wang, L. Jiao, T. He, J. Li, and M. Mühlhäuser, “Service entity placement for social virtual reality applications in edge computing,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, 2018, pp. 468–476.
[3] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971, 2015.
[4] S.-C. Tseng, Z.-W. Liu, Y.-C. Chou, and C.-W. Huang, “Radio resource scheduling for 5G NR via deep deterministic policy gradient,” in 2019 IEEE International Conference on Communications Workshops (ICC Workshops), Shanghai, China, May 2019, pp. 1–6.
[5] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[6] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” arXiv preprint arXiv:1703.03400, 2017.
[7] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems, 2017, pp. 4077–4087.
[8] G. Koch, R. Zemel, and R. Salakhutdinov, “Siamese neural networks for one-shot image recognition,” in ICML Deep Learning Workshop, vol. 2, Lille, 2015.
[9] J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick, “Learning to reinforcement learn,” arXiv preprint arXiv:1611.05763, 2016.
[10] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel, “RL2: Fast reinforcement learning via slow reinforcement learning,” arXiv preprint arXiv:1611.02779, 2016.
[11] A. Gupta, R. Mendonca, Y. Liu, P. Abbeel, and S. Levine, “Meta-reinforcement learning of structured exploration strategies,” in Advances in Neural Information Processing Systems, 2018, pp. 5302–5311.
[12] T. Xu, Q. Liu, L. Zhao, and J. Peng, “Learning to explore via meta-policy gradient,” in International Conference on Machine Learning, 2018, pp. 5463–5472.
[13] M. Chen, W. Saad, C. Yin, and M. Debbah, “Data correlation-aware resource management in wireless virtual reality (VR): An echo state transfer learning approach,” IEEE Transactions on Communications, vol. 67, no. 6, pp. 4267–4280, 2019.
[14] Y. Zhang, L. Jiao, J. Yan, and X. Lin, “Dynamic service placement for virtual reality group gaming on mobile edge cloudlets,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 8, pp. 1881–1897, 2019.
[15] F. Guo, L. Ma, H. Zhang, H. Ji, and X. Li, “Joint load management and resource allocation in the energy harvesting powered small cell networks with mobile edge computing,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2018, pp. 299–304.
[16] H. Ahmadi, O. Eltobgy, and M. Hefeeda, “Adaptive multicast streaming of virtual reality content to mobile users,” in Proceedings of the Thematic Workshops of ACM Multimedia 2017, ser. Thematic Workshops ’17. New York, NY, USA: Association for Computing Machinery, 2017, pp. 170–178. [Online]. Available: https://doi.org/10.1145/3126686.3126743
[17] J. Yang, J. Luo, D. Meng, and J. Hwang, “QoE-driven resource allocation optimized for uplink delivery of delay-sensitive VR video over cellular network,” IEEE Access, vol. 7, pp. 60672–60683, 2019.
[18] X. Yang, Z. Chen, K. Li, Y. Sun, N. Liu, W. Xie, and Y. Zhao, “Communication-constrained mobile edge computing systems for wireless virtual reality: Scheduling and tradeoff,” IEEE Access, vol. 6, pp. 16665–16677, 2018.
[19] Y. Mori, N. Fukushima, T. Fujii, and M. Tanimoto, “View generation with 3D warping using depth information for FTV,” in 2008 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, May 2008, pp. 229–232.
[20] H. Mao, M. Alizadeh, I. Menache, and S. Kandula, “Resource management with deep reinforcement learning,” in Proceedings of the 15th ACM Workshop on Hot Topics in Networks - HotNets ’16. New York, NY, USA: ACM Press, 2016, pp. 50–56.
[21] G. Dulac-Arnold, R. Evans, H. van Hasselt, P. Sunehag, T. Lillicrap, J. Hunt, T. Mann, T. Weber, T. Degris, and B. Coppin, “Deep reinforcement learning in large discrete action spaces,” arXiv preprint arXiv:1512.07679, 2015.
[22] M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” arXiv preprint arXiv:1508.04025, 2015.
[23] Y.-C. Chen, Y.-T. Lin, and C.-W. Huang, “A hybrid scenario generator and its application on network simulations,” in 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, Sep. 2020.
[24] Mojang, “Minecraft,” https://www.minecraft.net, 2009.
[25] “CraftBukkit,” https://getbukkit.org, 2009.
[26] Xikage, “MythicMobs,” https://www.spigotmc.org/resources/mythicmobs-free-version-the-1-custom-mob-creator.5702, 2015.
[27] 3GPP TS 23.501, “System Architecture for the 5G System.”