Thesis 104582006 Detailed Record




Author: Chien-Chang Liu (劉建昌)   Department: Computer Science and Information Engineering
Thesis Title: The Study of Deep Learning on 5G/B5G Network Slicing Management
(深度學習應用於5G/B5G網路切片管理之研究)
Related Theses
★ A Study of Stable QoS Routing Mechanisms in Wireless Mobile Ad Hoc Networks
★ A Network Management System Using Multiple Mobile Agents
★ A Cooperative Network Defense System Using Mobile Agents
★ A Study of QoS Routing under Uncertain Link-State Information
★ Improving Path-Setup Performance in Optical Burst Switching via Traffic Observation
★ A Study of Sensor Networks and Game Theory Applied to Comfort-Oriented Air Conditioning
★ A Search-Tree-Based Routing Algorithm for Wireless Sensor Networks
★ A Lightweight Positioning System for Mobile Devices Based on Wireless Sensor Networks
★ A Multimedia Guided Toy Vehicle
★ A Guided Toy Vehicle Based on Smart Floor
★ A Mobile Social Network Service Management System for Families of Children with Developmental Delays
★ A Location-Aware Wearable Mobile Advertising System
★ Adaptive Vehicular Broadcasting
★ A Vehicle Collision Avoidance Mechanism with Early Warning on Vehicular Networks
★ A Cooperative Traffic Information Dissemination Mechanism on Wireless Vehicular Networks to Alleviate Vehicle Congestion
★ Adaptive Virtual Traffic Lights over Vehicular Networks to Alleviate Congestion in Smart Cities
Full Text: not available in the system (permanently restricted)
Abstract: Using network slicing to support different services is a key technology of 5G/B5G networks. Many flexible control methods have been applied to problems such as admission control and resource management; their common feature is that they all rely on parameters of the current network environment, such as the network's maximum and minimum transmission rates and the available resources.
However, because network slicing shares physical resources through virtualization, slice resource management grows increasingly complex, and managing slices according to resource usage is very difficult. To solve this problem, this thesis proposes a management system combining a deep reinforcement learning algorithm with a deep neural network to predict future resource usage. Slice requests are admitted through a deep-neural-network policy, and rewards are maximized through interaction with the environment. We propose a staged slice management strategy: supervised learning first initializes the policy parameters, reinforcement learning then updates them through interaction with the environment, and the resulting policy finally manages slice requests. Data collected at each stage are used for retraining, so the policy network parameters are continuously updated to obtain better control results. In addition, this thesis adopts two green learning methods, pre-training and reducing the number of model parameters, to speed up model training and achieve better performance.
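The accept/reject policy trained by interaction with the environment, as described in the abstract, can be sketched with a toy REINFORCE loop. Everything below is an illustrative assumption rather than the thesis's actual design: the linear two-logit policy, the feature vector (normalized demand, duration, and free capacity), the crude slice-departure model, and the reward `demand / duration` are all stand-ins for the real policy network and 5G simulator.

```python
# Toy sketch of RL-based slice admission control (assumed setup, not the
# thesis's simulator): a linear softmax policy over {reject, accept},
# trained with the REINFORCE policy-gradient rule.
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class AdmissionPolicy:
    """Linear policy over (demand, duration, free-capacity) features."""
    def __init__(self, n_features=3, lr=0.05):
        # One weight row per action: index 0 = reject, index 1 = accept.
        self.w = [[0.0] * n_features for _ in range(2)]
        self.lr = lr

    def probs(self, x):
        return softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in self.w])

    def act(self, x):
        return 1 if random.random() < self.probs(x)[1] else 0

    def update(self, episode, ret):
        # REINFORCE: w += lr * G * grad log pi(a|x), where for a linear
        # softmax policy the gradient is (1{k==a} - p_k) * x_j.
        for x, a in episode:
            p = self.probs(x)
            for k in range(2):
                g = (1.0 if k == a else 0.0) - p[k]
                for j in range(len(x)):
                    self.w[k][j] += self.lr * ret * g * x[j]

def run_episode(policy, capacity=10, n_requests=20):
    free, total_reward, steps = capacity, 0.0, []
    for _ in range(n_requests):
        demand = random.randint(1, 4)
        duration = random.randint(1, 5)
        x = [demand / 4, duration / 5, free / capacity]
        a = policy.act(x)
        if a == 1 and demand <= free:
            free -= demand
            total_reward += demand / duration  # favour short, resource-heavy slices
        elif a == 1:
            total_reward -= 1.0                # penalise infeasible admissions
        steps.append((x, a))
        free = min(capacity, free + 1)         # crude slice-departure model
    return steps, total_reward

policy = AdmissionPolicy()
for _ in range(200):
    steps, ret = run_episode(policy)
    policy.update(steps, ret)
```

The staged strategy in the abstract would replace the zero initialization here with supervised pre-training on logged admission decisions before the interaction loop begins.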
Keywords
★ 5G
★ network slicing
★ reinforcement learning
Table of Contents
Abstract (Chinese)
Abstract (English)
Contents
List of Figures
List of Tables
Abbreviations
Notations and Definitions
Chapter 1 Introduction
1.1 5G/B5G Network Slicing
1.2 Network Slicing Management
1.3 Motivations and Goals
1.4 Thesis Organization
Chapter 2 Related Works
Chapter 3 Admission Control Policy Network Design Using Deep Reinforcement Learning
3.1 Admission Control Policy Network Architecture
3.2 Function Description
3.2.1 Admission Control Policy Network
3.2.2 Reward Calculator
Chapter 4 Admission Control Policy Network Applied to Network Slicing Management
4.1 Network Slicing Management
4.2 Tetris, Shortest Job First, and Random Algorithms
4.3 Simulation Results and Comparison
4.3.1 High Proportion of Short Slice Durations
4.3.2 Medium Proportion of Short Slice Durations
4.3.3 Low Proportion of Short Slice Durations
Chapter 5 Applying Green Learning to the Slice Admission Control Policy Network
5.1 Pre-training
5.1.1 High Proportion of Short Slice Durations with Pre-training
5.1.2 Medium Proportion of Short Slice Durations with Pre-training
5.1.3 Low Proportion of Short Slice Durations with Pre-training
5.2 Reducing Model Parameters
5.2.1 Slot Size M Set to 2
5.2.2 Slot Size M Set to 8
5.2.3 Slot Size M Set to 10
Chapter 6 Conclusions and Future Works
References
Advisor: Li-Der Chou (周立德)   Approval Date: 2024-01-25

For questions about this thesis, please contact the Promotion Services Division, National Central University Library: TEL (03)422-7151 ext. 57407, or by e-mail.