References
[1] S. Jacob et al., “A novel spectrum sharing scheme using dynamic long short-term memory with CP-OFDMA in 5G networks,” IEEE Trans. Cogn. Commun. Netw., vol. 6, no. 3, pp. 926–934, Sep. 2020, doi: 10.1109/TCCN.2020.2970697.
[2] NGMN Alliance, “NGMN 5G White Paper,” Feb. 2015. [Online]. Available: https://www.ngmn.org/5g-white-paper/5g-white-paper.html
[3] 3GPP, “Management and orchestration; Concepts, use cases and requirements,” TS 28.530.
[4] 3GPP, “System architecture for the 5G System,” TS 23.501.
[5] W.-K. Chen, Y.-F. Liu, Y.-H. Dai, and Z.-Q. Luo, “An efficient linear programming rounding-and-refinement algorithm for large-scale network slicing problem,” in Proc. 46th IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), Toronto, ON, Canada, Jun. 2021, pp. 4735–4739.
[6] X. Li, J. Rao, H. Zhang, and A. Callard, “Network slicing with elastic SFC,” in Proc. IEEE 86th Veh. Technol. Conf. (VTC-Fall), 2017, pp. 1–5.
[7] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press, 1998.
[8] M. Jiang, M. Condoluci, and T. Mahmoodi, “Network slicing management & prioritization in 5G mobile systems,” in Proc. Eur. Wireless Conf., Oulu, Finland, 2016, pp. 1–6.
[9] A. Perveen, M. Patwary, and A. Aneiba, “Dynamically reconfigurable slice allocation and admission control within 5G wireless networks,” in Proc. IEEE 89th Veh. Technol. Conf., 2019, pp. 1–7.
[10] A. Kammoun, N. Tabbane, G. Diaz, and N. Achir, “Admission control algorithm for network slicing management in SDN-NFV environment,” in Proc. 6th Int. Conf. Multimedia Comput. Syst. (ICMCS), 2018, pp. 1–6.
[11] P. Caballero, A. Banchs, G. de Veciana, X. Costa-Pérez, and A. Azcorra, “Network slicing for guaranteed rate services: Admission control and resource allocation games,” IEEE Trans. Wireless Commun., vol. 17, no. 10, pp. 6419–6432, Oct. 2018.
[12] H. M. Soliman and A. Leon-Garcia, “QoS-aware frequency-space network slicing and admission control for virtual wireless networks,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Washington, DC, USA, Dec. 2016, pp. 1–6.
[13] B. Han et al., “Admission and congestion control for 5G network slicing,” in Proc. IEEE Conf. Stand. Commun. Netw. (CSCN), 2018, pp. 1–6.
[14] M. R. Raza, C. Natalino, P. Öhlen, L. Wosinska, and P. Monti, “A slice admission policy based on reinforcement learning for a 5G flexible RAN,” in Proc. Eur. Conf. Opt. Commun. (ECOC), 2018, pp. 1–3.
[15] C. Natalino, M. R. Raza, A. Rostami, P. Öhlen, L. Wosinska, and P. Monti, “Machine learning aided resource orchestration in multi-tenant networks,” in Proc. IEEE Photon. Summer Top. Meeting, Jul. 2018, doi: 10.1109/PHOSST.2018.8456735.
[16] H. Tong and T. X. Brown, “Adaptive call admission control under quality of service constraints: a reinforcement learning solution,” IEEE J. Sel. Areas Commun., vol. 18, no. 2, pp. 209–221, Feb. 2000.
[17] D. Bega et al., “A machine learning approach to 5G infrastructure market optimization,” IEEE Trans. Mobile Comput., vol. 19, no. 3, pp. 498–512, Feb. 2020.
[18] R. Li et al., “Deep reinforcement learning for resource management in network slicing,” IEEE Access, vol. 6, pp. 74429–74441, 2018.
[19] G. Sun, K. Xiong, G. O. Boateng, D. Ayepah-Mensah, G. Liu, and W. Jiang, “Autonomous resource provisioning and resource customization for mixed traffics in virtualized radio access network,” IEEE Syst. J., vol. 13, no. 3, pp. 2454–2465, Sep. 2019.
[20] A. Namdari and Z. S. Li, “An entropy-based approach for modeling lithium-ion battery capacity fade,” in Proc. Annu. Rel. Maintainability Symp. (RAMS), Jan. 2020, pp. 1–7, doi: 10.1109/RAMS48030.2020.9153698.
[21] J. Lee and M. Mitici, “Deep reinforcement learning for predictive aircraft maintenance using probabilistic remaining-useful-life prognostics,” Rel. Eng. Syst. Saf., vol. 230, Feb. 2023, Art. no. 108908.
[22] A. Namdari and Z. S. Li, “A multiscale entropy-based long short term memory model for lithium-ion battery prognostics,” in Proc. IEEE Int. Conf. Progn. Health Manage. (ICPHM), Detroit, MI, USA, Jun. 2021, pp. 1–6, doi: 10.1109/ICPHM51084.2021.9486674.
[23] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst., vol. 25, pp. 1097–1105, 2012.
[24] S. K. Kumar, “On weight initialization in deep neural networks,” 2017, arXiv:1704.08863. [Online]. Available: http://arxiv.org/abs/1704.08863
[25] A. M. Saxe, J. L. McClelland, and S. Ganguli, “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks,” 2013, arXiv:1312.6120. [Online]. Available: https://arxiv.org/abs/1312.6120
[26] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “FitNets: Hints for thin deep nets,” 2014, arXiv:1412.6550. [Online]. Available: https://arxiv.org/abs/1412.6550
[27] B. Hanin and D. Rolnick, “How to start training: The effect of initialization and architecture,” Adv. Neural Inf. Process. Syst., vol. 31, pp. 571–581, 2018. [Online]. Available: https://proceedings.neurips.cc/paper/2018/file/d81f9c1be2e08964bf9f24b15f0e4900-Paper.pdf
[28] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2015, pp. 1026–1034.
[29] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proc. 13th Int. Conf. Artif. Intell. Statist., 2010, pp. 249–256.
[30] D. Mishkin and J. Matas, “All you need is a good init,” 2015, arXiv:1511.06422. [Online]. Available: https://arxiv.org/abs/1511.06422
[31] H. Zhang, Y. N. Dauphin, and T. Ma, “Fixup initialization: Residual learning without normalization,” 2019, arXiv:1901.09321.
[32] L. Huang, S. Fayong, and Y. Guo, “Structured perceptron with inexact search,” in Proc. Conf. North Amer. Chapter Assoc. Comput. Linguistics, 2012, pp. 142–151.
[33] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 1717–1724.
[34] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” in Proc. 27th Int. Conf. Neural Inf. Process. Syst., 2014, pp. 3320–3328.
[35] L. Duong, T. Cohn, S. Bird, and P. Cook, “Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser,” in Proc. 53rd Annu. Meeting Assoc. Comput. Linguist. 7th Int. Joint Conf. Nat. Lang. Process. (Short Papers), vol. 2, Jul. 2015, pp. 845–850.
[36] M. Long, H. Zhu, J. Wang, and M. I. Jordan, “Unsupervised domain adaptation with residual transfer networks,” in Proc. 30th Annu. Conf. Neural Inf. Process. Syst., Dec. 2016, pp. 136–144.
[37] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. Int. Conf. Learn. Representations, 2015.
[38] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, Jun. 2017.
[39] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
[40] M. Simon, E. Rodner, and J. Denzler, “ImageNet pre-trained models with batch normalization,” 2016, arXiv:1612.01452. [Online]. Available: http://arxiv.org/abs/1612.01452
[41] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 386–397, Feb. 2020.
[42] V. Iglovikov and A. Shvets, “TernausNet: U-Net with VGG11 encoder pre-trained on ImageNet for image segmentation,” 2018, arXiv:1801.05746. [Online]. Available: http://arxiv.org/abs/1801.05746
[43] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Proc. Eur. Conf. Comput. Vis., 2014, pp. 818–833.
[44] G. Li, N. Duan, Y. Fang, M. Gong, D. Jiang, and M. Zhou, “Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training,” 2019, arXiv:1908.06066.
[45] R. Socher et al., “Recursive deep models for semantic compositionality over a sentiment treebank,” in Proc. Conf. Empir. Methods Natural Lang. Process., 2013, pp. 1631–1642.
[46] D. Dong, H. Wu, W. He, D. Yu, and H. Wang, “Multi-task learning for multiple language translation,” in Proc. 53rd Annu. Meeting Assoc. Comput. Linguistics, 2015, pp. 1723–1732.
[47] M.-T. Luong, Q. V. Le, I. Sutskever, O. Vinyals, and L. Kaiser, “Multi-task sequence to sequence learning,” in Proc. Int. Conf. Learn. Representations, 2016.
[48] Z. Yang, R. Salakhutdinov, and W. W. Cohen, “Transfer learning for sequence tagging with hierarchical recurrent networks,” in Proc. Int. Conf. Learn. Representations, 2017.
[49] Y. Lin, S. Yang, V. Stoyanov, and H. Ji, “A multi-lingual multi-task architecture for low-resource sequence labeling,” in Proc. 56th Annu. Meeting Assoc. Comput. Linguistics, 2018, pp. 799–809.
[50] L. Liu et al., “Empower sequence labeling with task-aware neural language model,” in Proc. AAAI Conf. Artif. Intell., 2018, pp. 5253–5260.
[51] M. Johnson et al., “Google’s multilingual neural machine translation system: Enabling zero-shot translation,” 2016, arXiv:1611.04558. [Online]. Available: http://arxiv.org/abs/1611.04558
[52] B. Zhang, P. Williams, I. Titov, and R. Sennrich, “Improving massively multilingual neural machine translation and zero-shot translation,” in Proc. 58th Annu. Meeting Assoc. Comput. Linguistics, 2020, pp. 1628–1639.
[53] X. Tan, Y. Ren, D. He, T. Qin, Z. Zhao, and T.-Y. Liu, “Multilingual neural machine translation with knowledge distillation,” in Proc. 7th Int. Conf. Learn. Representations, 2019.
[54] E. Belilovsky, M. Eickenberg, and E. Oyallon, “Greedy layerwise learning can scale to ImageNet,” in Proc. Int. Conf. Mach. Learn., 2019.
[55] Z. Lin, X. Pan, M. Wang, X. Qiu, J. Feng, H. Zhou, and L. Li, “Pre-training multilingual neural machine translation by leveraging alignment information,” in Proc. Conf. Empir. Methods Natural Lang. Process. (EMNLP), 2020.
[56] O. Press and L. Wolf, “Using the output embedding to improve language models,” in Proc. 15th Conf. Eur. Chapter Assoc. Comput. Linguistics, vol. 2, 2017, pp. 157–163.
[57] Y. Kim, Y. Jernite, D. Sontag, and A. M. Rush, “Character-aware neural language models,” in Proc. AAAI Conf. Artif. Intell., 2016, pp. 2741–2749.
[58] M. R. Costa-Jussà and J. A. R. Fonollosa, “Character-based neural machine translation,” 2016, arXiv:1603.00810.
[59] R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu, “Exploring the limits of language modeling,” 2016, arXiv:1602.02410. [Online]. Available: http://arxiv.org/abs/1602.02410
[60] D. P. Bertsekas and J. N. Tsitsiklis, “Neuro-dynamic programming: An overview,” in Proc. 34th IEEE Conf. Decis. Control, New Orleans, LA, USA, Dec. 1995, pp. 560–564.
[61] M. T. Hagan, H. B. Demuth, and M. H. Beale, Neural Network Design. Boston, MA, USA: PWS Publishing, 1996.
[62] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[63] D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016.
[64] W. K. Hastings, “Monte Carlo sampling methods using Markov chains and their applications,” Biometrika, vol. 57, no. 1, pp. 97–109, Apr. 1970, doi: 10.1093/biomet/57.1.97.
[65] R. Grandl, G. Ananthanarayanan, S. Kandula, S. Rao, and A. Akella, “Multi-resource packing for cluster schedulers,” ACM SIGCOMM Comput. Commun. Rev., vol. 44, no. 4, pp. 455–466, 2014.
[66] H. Mao, M. Alizadeh, I. Menache, and S. Kandula, “Resource management with deep reinforcement learning,” in Proc. ACM Workshop Hot Topics Netw., 2016, pp. 50–56.