Master's/Doctoral Thesis 106423025: Detailed Record




Name Po-Lun Wang (王伯倫)   Department Information Management
Thesis Title A Study on Optimization Problem with Gaussian Distribution Based Whale Optimization Algorithm
(Chinese title: 高斯鯨群演算法於最佳化問題之研究)
Related Theses
★ A Study on Variable Selection in Intelligent Systems and Applications
★ A Study on Parameter Estimation for Intelligent Systems: A New DE Approach
★ A Study on Ensemble-Learning Intelligent Systems for Classification Problems
★ A Study on Complex Fuzzy Neural Systems for Multi-Class Classification Problems
★ Complex Fuzzy Cognitive Maps with Metacognitive Strategies for Classification Problems
★ A Study on Classification Problems Using Complex Fuzzy Neural Systems
★ A Study on Intelligent ARIMA Models for Time Series Forecasting
★ A Study on Computational Intelligence and Complex Fuzzy Sets for Adaptive Image Processing
★ Intelligent Fuzzy Neural Computing Models Using Complex Fuzzy Sets and ARIMA Models
★ Empirical Study on IEEE 802.11 Wireless Signal – A Case Study at the NCU Campus
★ A Study on Self-Constructing Complex Fuzzy ARIMA for Index Volatility Forecasting
★ A Study on Data Preprocessing: A Genetic Algorithm Approach
★ Support-Vector-Oriented Sample Selection for Text Classification
★ A Study on Intelligent Interval Prediction Using Complex Fuzzy Neural Networks, Support Vector Regression, and Bootstrap Statistics
★ Complex Fuzzy Neural Networks for Multi-Target Financial Forecasting
★ Intelligent Fuzzy Neural Computing Using Asymmetric Fuzzy Neural Network Systems and Spherical Complex Fuzzy Sets
  1. The electronic full text of this thesis is approved for immediate open access.
  2. Where open access applies, the electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) In recent years, machine learning has improved markedly in capability, bringing better performance to artificial intelligence systems such as neural networks; this has come with a large increase in the number of model parameters. In other words, as neural network models grow deeper and the number of neurons per layer grows sharply, the surge in parameters makes it harder for the learning process to find optimal solutions. Research on optimization algorithms that can search high-dimensional parameter spaces has therefore become increasingly important. This study proposes an improved algorithm, the Gaussian Distribution based Whale Optimization Algorithm (GD-WOA). Although the original Whale Optimization Algorithm (WOA) offers good search ability and a simple optimization strategy, our experiments show that its optimization ability gradually becomes insufficient as the dimensionality of the problem increases; WOA is also weak at escaping local optima and lacks generality across optimization problems. The proposed GD-WOA therefore improves WOA with two strategies. The first builds a Gaussian random distribution around the position of the best whale found during the search and samples a new position from that distribution, which then serves as the target the whale population approaches. The second is a randomized expanded search that preserves a degree of exploration throughout the entire run; in particular, when the search encounters a local optimum, this strategy reduces the risk of optimization stagnation. We evaluate the search ability and generality of GD-WOA on 38 unconstrained and 30 constrained benchmark functions. Most of these functions are scalable to various dimensionalities, from 50 to 10,000 dimensions; a few have fixed dimensionalities ranging from 2 to 13. The experimental results show that the proposed GD-WOA delivers excellent search performance and good stability, especially on high-dimensional function optimization, and performance comparisons with several well-known optimization methods from the literature show that GD-WOA performs very well.
Abstract (English) In recent years, machine learning has improved significantly in capability, resulting in better performance for artificial intelligence systems such as neural networks; this means that the number of parameters in such models has increased substantially. In other words, as the depth of a neural network grows and the number of neurons in each layer increases, the model's parameters rise steeply and the optimal solution becomes harder to find during the machine learning process. Research on optimization algorithms that can handle high-dimensional parameters has therefore become more important. This study proposes an improved algorithm called the Gaussian Distribution based Whale Optimization Algorithm (GD-WOA). Although the original Whale Optimization Algorithm (WOA) has good search ability and a simple optimization strategy, our experiments show that its optimization ability gradually becomes insufficient as the parameter dimension increases. In addition, WOA has shortcomings in escaping local optima and in its generality across optimization problems. In light of this, GD-WOA improves WOA with two strategies. The first establishes a Gaussian random distribution at the position of the best whale found during the search and samples a new position from it; this sampled position becomes the target that the whales approach. The second is a randomized expanded search that keeps the algorithm exploring throughout the run; in particular, when the search encounters a local optimum, this strategy mitigates the risk of optimization stagnation. In this study, we use 38 unconstrained functions and 30 constrained functions to test the optimization ability and generality of GD-WOA. Most of these functions are scalable to a variety of dimensions, from 50 to 10,000; a small number have fixed dimensions ranging from 2 to 13. The experimental results show that the proposed GD-WOA has excellent search performance and good stability, especially on high-dimensional function optimization, and comparisons with several well-known optimization methods from the literature show that it performs very well.
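To make the two strategies above concrete, the following is a minimal, hypothetical Python sketch of a GD-WOA-style search loop, reconstructed only from the abstract's description. The function name gd_woa_sketch, the parameters sigma and p_expand, the linearly shrinking coefficient a, and the uniform-restart form of the expanded search are all illustrative assumptions, not the thesis's actual update equations (those are defined in Chapter 3).

import numpy as np

def gd_woa_sketch(objective, dim=50, n_whales=30, max_iter=500,
                  lb=-100.0, ub=100.0, sigma=0.1, p_expand=0.2):
    # Hypothetical sketch of the two GD-WOA strategies from the abstract;
    # coefficients and update rules are illustrative guesses, not the
    # thesis's exact equations.
    pos = np.random.uniform(lb, ub, (n_whales, dim))   # initial whale positions
    best = pos[np.apply_along_axis(objective, 1, pos).argmin()].copy()

    for t in range(max_iter):
        a = 2.0 * (1.0 - t / max_iter)                 # WOA-style shrinking coefficient
        # Strategy 1: build a Gaussian distribution around the best whale
        # and sample a new position to serve as the swarm's target.
        target = np.clip(best + np.random.normal(0.0, sigma * (ub - lb), dim), lb, ub)
        for i in range(n_whales):
            if np.random.rand() < p_expand:
                # Strategy 2: randomized expanded search, which keeps some
                # exploration alive and mitigates stagnation at local optima.
                pos[i] = np.random.uniform(lb, ub, dim)
            else:
                # WOA-style encircling move, here directed at the sampled target.
                A = 2.0 * a * np.random.rand(dim) - a
                C = 2.0 * np.random.rand(dim)
                pos[i] = np.clip(target - A * np.abs(C * target - pos[i]), lb, ub)
        fit = np.apply_along_axis(objective, 1, pos)
        if fit.min() < objective(best):                # keep the best whale found so far
            best = pos[fit.argmin()].copy()
    return best, objective(best)

# Example: minimize the 50-dimensional sphere function.
best_x, best_f = gd_woa_sketch(lambda x: np.sum(x ** 2))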
Keywords (Chinese) ★ Whale Optimization Algorithm
★ Gaussian Distribution
★ Optimization Algorithm
★ Machine Learning
Keywords (English) ★ Whale optimization algorithm
★ Gaussian distribution
★ Optimization algorithm
★ Machine Learning
Table of Contents Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
List of Figures vi
List of Tables vii
Notation and Terminology ix
1. Introduction 1
1-1 Research Background and Objectives 1
1-2 Research Methods 2
1-3 Thesis Organization 2
2. Literature Review 3
2-1 Machine Learning 3
2-2 Artificial Neural Networks (ANN) 4
2-3 Optimization Algorithms 4
2-4 Whale Optimization Algorithm (WOA) 6
2-4-1 Exploration Phase 7
2-4-2 Exploitation Phase 7
3. Gaussian Distribution Based Whale Optimization Algorithm 9
3-1 Generating New Positions from a Gaussian Distribution 9
3-2 Strengthening the Collective Experience of the Whale Population 10
3-3 Algorithm Design of GD-WOA 10
4. Experiments 16
4-1 Unconstrained Function Experiments 17
4-1-1 Results of the Unconstrained Function Experiments 23
4-2 Constrained Function Experiments 32
4-2-1 Settings for the Constrained Function Experiments 33
4-2-2 Results of the Constrained Function Experiments 37
4-3 Comparative Experiments with the Literature 48
4-3-1 Results of the Comparative Experiments 49
5. Experimental Results and Discussion 54
6. Conclusions and Suggestions 56
References 57
Advisor Chunshien Li (李俊賢)   Date of Approval 2019-7-16
