Thesis Record 111522148: Details




Author: Tsung-Hsun Wu (吳宗勲)    Department: Computer Science and Information Engineering
Thesis Title: A Malicious Network Traffic Generation Model Based on Generative Adversarial Networks
(MCM-GAN: Malicious Cyber Maker using Generative Adversarial Networks)
Related Theses
★ Research on stable QoS routing mechanisms in wireless mobile ad hoc networks
★ A network management system using multiple mobile agents
★ A collaborative network defense system using mobile agents
★ Research on QoS routing under uncertain link-state information
★ Improving path-setup performance in optical burst switching through traffic observation
★ Applying sensor networks and game theory to comfort-oriented air conditioning
★ A search-tree-based routing algorithm for wireless sensor networks
★ A lightweight positioning system for mobile devices based on wireless sensor networks
★ A multimedia guided-tour toy car
★ A Smart Floor-based guided-tour toy car
★ A mobile social networking service management system for families of children with developmental delays
★ A location-aware wearable mobile advertising system
★ Adaptive vehicular broadcasting
★ A collision-avoidance mechanism with early warning for vehicular networks
★ A cooperative traffic information dissemination mechanism on wireless vehicular networks to relieve congestion
★ Adaptive virtual traffic signals using vehicular networks to relieve congestion in smart cities
Files: [Endnote RIS format] [BibTeX format]    Full text: viewable through the system only (access permanently restricted)
Abstract (Chinese) With the rapid development of the Internet, malicious attacks are increasingly common. In response, using machine learning or deep learning in intrusion detection systems (IDS) to detect malicious traffic has become the dominant trend. However, many of the available datasets are public, and public datasets often suffer from data imbalance. A common remedy is the Synthetic Minority Over-sampling Technique (SMOTE), but the traffic it generates lacks realism; effectively generating diverse traffic that matches real-world conditions remains a major challenge.
This thesis proposes an architecture named "Malicious Cyber Maker GAN (MCM-GAN)" to address training instability and mode collapse and to ensure diversity in the generated samples. The architecture adopts the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) design, replacing the original weight clipping with a gradient penalty to stabilize the generation process. It further incorporates the Conditional Generative Adversarial Network (CGAN) and the Two Time-Scale Update Rule (TTUR) to improve generation quality and training efficiency. Experiments on the UNSW-NB15 dataset show that when generating nine types of malicious network attacks, MCM-GAN reduces training time by 21.99% and model size by 27.62% compared with using LSTM as the generator and discriminator. XGBoost, Random Forest, and SVC models trained on the original data combined with MCM-GAN-generated data reach F1-Scores of 89.07%, 87.04%, and 84.31%, respectively, higher than those obtained with generation techniques such as SVM-SMOTE, GRU-CGAN, and LSTM-CGAN. Compared with models trained on the original data alone, the average F1-Scores improve by 6.83%, 6.48%, and 9.91%, respectively.
Abstract (English) With the rapid development of the Internet, malicious attacks are becoming increasingly common. To address this issue, the use of machine learning or deep learning in intrusion detection systems (IDS) to detect malicious traffic has become a major trend. However, since many datasets are public and often suffer from data imbalance, a common solution is the Synthetic Minority Over-sampling Technique (SMOTE); the traffic generated by this method, however, lacks authenticity. Effectively generating diverse and realistic traffic remains a significant challenge.
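The SMOTE baseline mentioned above works by interpolating between a minority-class sample and one of its nearest minority-class neighbors. The following is a minimal hand-rolled sketch of that interpolation step; the toy 2-D data, `k`, and the seed are assumed for illustration, and a real pipeline would use a library implementation such as imbalanced-learn's `SMOTE`/`SVMSMOTE` instead.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by SMOTE-style interpolation."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]        # k nearest, skipping i itself
        j = rng.choice(neighbors)
        gap = rng.random()                        # random point on the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class: four points at the corners of the unit square.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_oversample(X_min, n_new=6, rng=42)
```

Because each synthetic point lies on a segment between two existing samples, SMOTE only ever fills the convex hull of the minority class, which is one reason the generated traffic can lack realism.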
This thesis presents an architecture called "Malicious Cyber Maker GAN (MCM-GAN)", which addresses training instability and mode collapse while ensuring diversity in the generated samples. The framework adopts the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) design, which replaces the original weight clipping with a gradient penalty to stabilize the generation process. In addition, the work incorporates the Conditional Generative Adversarial Network (CGAN) and the Two Time-Scale Update Rule (TTUR) to further improve generation quality and training efficiency. Experimental results on the UNSW-NB15 dataset show that when generating nine types of malicious network attacks with MCM-GAN, training time was reduced by 21.99% and model size by 27.62% compared to using LSTM as the generator and discriminator. XGBoost, Random Forest, and SVC models trained on the original data combined with MCM-GAN-generated data achieved F1-Scores of 89.07%, 87.04%, and 84.31%, respectively, higher than those of generation techniques such as SVM-SMOTE, GRU-CGAN, and LSTM-CGAN. Moreover, compared with models trained only on the original data, these models improved their average F1-Scores by 6.83%, 6.48%, and 9.91%, respectively.
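The three ingredients the abstract combines, conditional inputs (CGAN), a gradient penalty in place of weight clipping (WGAN-GP), and distinct learning rates for critic and generator (TTUR), can be sketched together in PyTorch. This is an illustrative sketch only: the layer sizes, feature dimension (10), noise dimension, and learning rates are assumed for the example and are not the thesis's actual MCM-GAN configuration (only the nine attack classes come from the abstract).

```python
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, NOISE_DIM = 10, 9, 16   # feature/noise sizes assumed

# CGAN-style conditioning: generator and critic both see a one-hot class label.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM + N_CLASSES, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
critic = nn.Sequential(
    nn.Linear(N_FEATURES + N_CLASSES, 64), nn.ReLU(),
    nn.Linear(64, 1),                          # unbounded score, not a probability
)

def gradient_penalty(critic, real, fake, labels, lambda_gp=10.0):
    """WGAN-GP: push the critic's gradient norm toward 1 on interpolated
    samples, replacing the weight clipping of the original WGAN."""
    alpha = torch.rand(real.size(0), 1)        # per-sample mixing weight
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(torch.cat([interp, labels], dim=1))
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores), create_graph=True,
    )[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# TTUR: the critic updates with a larger learning rate than the generator.
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
c_opt = torch.optim.Adam(critic.parameters(), lr=4e-4, betas=(0.0, 0.9))

labels = torch.eye(N_CLASSES)[torch.randint(0, N_CLASSES, (8,))]
real = torch.randn(8, N_FEATURES)              # stand-in for real flow features
fake = generator(torch.cat([torch.randn(8, NOISE_DIM), labels], dim=1))
gp = gradient_penalty(critic, real, fake.detach(), labels)
```

In a full training loop the penalty term would be added to the critic's Wasserstein loss on each critic step, with the generator updated less aggressively via its smaller learning rate.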
Keywords (Chinese) ★ Conditional Generative Adversarial Network
★ Data Imbalance
★ Malicious Network Traffic Classification
★ Intrusion Detection System
Keywords (English) ★ Conditional Generative Adversarial Network
★ Data Imbalance
★ Malicious Network Traffic Classification
★ Intrusion Detection System
Thesis Outline
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1. Overview
1.2. Motivation
1.3. Objectives
1.4. Thesis Organization
Chapter 2: Background and Related Work
2.1. Generative Adversarial Networks
2.2. Gated Recurrent Units
2.3. Conditional Generation
2.4. Two Time-Scale Update Rule
2.5. Intrusion Detection Systems
2.6. Related Work
Chapter 3: Methodology
3.1. System Architecture and Design
3.2. System Workflow and Implementation
3.2.1. Data Preprocessing
3.2.2. Malicious Network Traffic Generation Model
3.2.3. IDS-Verification
3.3. System Environment
Chapter 4: Experiments and Discussion
4.1. Scenario 1: Validating GRU as the generator and discriminator for generating malicious network traffic
4.1.1. Experiment 1: PCA comparison of the data distributions of GRU-GAN and SVM-SMOTE
4.1.2. Experiment 2: Binary cross-entropy loss curves of GRU-GAN
4.2. Scenario 2: Analyzing MCM-GAN performance and comparing it with other models
4.2.1. Experiment 3: Comparing the distributions generated by MCM-GAN and other models
4.2.2. Experiment 4: Comparing parameter counts and model sizes of MCM-GAN and other models
4.2.3. Experiment 5: Comparing the average training time of MCM-GAN and other models
4.3. Scenario 3: Validating the effectiveness of MCM-GAN in IDS-Verification
4.3.1. Experiment 6: F1-Score improvement of the IDS with MCM-GAN-generated data
4.3.2. Experiment 7: Comparing different generation techniques for IDS optimization
4.3.3. Experiment 8: Comparing different optimizers for MCM-GAN generation
Chapter 5: Conclusion and Future Work
5.1. Conclusion
5.2. Limitations
5.3. Future Work
References
Advisor: Li-Der Chou (周立德)    Approval Date: 2024-08-09