Abstract: In recent years, machine learning has improved significantly in capability, yielding better performance for artificial intelligence systems such as neural networks; this also means that the number of parameters in computational models has grown substantially. In other words, as the depth of neural network models and the number of neurons in each layer increase, the sharp growth in model parameters makes it harder for the machine learning process to find an optimal solution. Research on optimization algorithms capable of searching high-dimensional parameter spaces therefore becomes increasingly important. This study proposes an improved algorithm called the Gaussian Distribution based Whale Optimization Algorithm (GD-WOA). Although the original Whale Optimization Algorithm (WOA) has good search ability and a simple optimization strategy, our experiments show that its optimization ability gradually becomes insufficient as the dimensionality of the problem increases. In addition, WOA has shortcomings in its ability to escape local optima and in its generality across optimization problems. In light of this, the proposed GD-WOA improves WOA with two strategies. The first builds a Gaussian random distribution around the position of the best whale found during the search and samples a new position from this distribution, which then serves as the target that the whale population approaches. The second is a randomized expanded search; that is, GD-WOA retains a certain level of exploration throughout the entire search process, which mitigates the risk of optimization stagnation, especially when the search encounters a local optimum. In this study, we use 38 unconstrained functions and 30 constrained functions to examine the optimization ability and generality of GD-WOA. Most of these functions can be configured with various dimensionalities, from 50 to 10,000 dimensions; a small number have fixed dimensionalities ranging from 2 to 13. The experimental results show that the proposed GD-WOA has excellent search performance and good stability, especially for high-dimensional function optimization. Comparisons with several well-known optimization methods in the literature show that the proposed GD-WOA algorithm performs very well.
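
The abstract describes the two GD-WOA strategies only at a high level, so the following is a minimal Python sketch of how they could be combined with a simplified whale-style position update. The function name gd_woa_sketch, the parameters sigma and explore_prob, and the simplified movement rule are illustrative assumptions, not the thesis's actual update equations.

```python
import numpy as np

def gd_woa_sketch(objective, dim, pop_size=30, iters=500,
                  lower=-100.0, upper=100.0, sigma=0.1, explore_prob=0.2):
    """Minimal sketch of the two GD-WOA strategies described in the abstract.

    The standard WOA encircling/spiral moves are simplified here; the exact
    update equations, the sigma value, and explore_prob are assumptions,
    not the thesis's actual formulation.
    """
    rng = np.random.default_rng()
    whales = rng.uniform(lower, upper, size=(pop_size, dim))
    fitness = np.apply_along_axis(objective, 1, whales)
    best = whales[np.argmin(fitness)].copy()
    best_fit = fitness.min()

    for _ in range(iters):
        # Strategy 1: build a Gaussian distribution around the current best
        # whale and sample a new target position for the population to approach.
        target = rng.normal(loc=best, scale=sigma * (upper - lower))
        target = np.clip(target, lower, upper)

        for i in range(pop_size):
            if rng.random() < explore_prob:
                # Strategy 2: randomized expanded search, keeping some
                # exploration throughout the run to reduce the risk of
                # stagnation at local optima (re-sample within the bounds).
                whales[i] = rng.uniform(lower, upper, size=dim)
            else:
                # Move toward the Gaussian-sampled target (a simplified
                # stand-in for WOA's encircling behaviour).
                step = rng.random(dim)
                whales[i] = np.clip(whales[i] + step * (target - whales[i]),
                                    lower, upper)

        fitness = np.apply_along_axis(objective, 1, whales)
        if fitness.min() < best_fit:
            best_fit = fitness.min()
            best = whales[np.argmin(fitness)].copy()

    return best, best_fit

# Usage example on the sphere function, a common unconstrained benchmark.
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    solution, value = gd_woa_sketch(sphere, dim=50)
    print(value)
```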