Thesis No. 91541001: Detailed Information


Name: Sheng-Sung Yang (楊盛松)    Department: Electrical Engineering
Thesis Title: Sensitivity Analysis of the Multilayer Perceptron due to the Errors of the Inputs and Weights & Improvements in BP and ES Algorithms
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access full text is licensed for personal, non-commercial retrieval, reading, and printing for academic research purposes only.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): The multilayer perceptron (MLP) is often used in algorithms such as the back-propagation (BP) algorithm, evolutionary algorithms (EA), and the extreme learning machine (ELM). Among these, BP and EA most commonly operate on MLP structures, and their performance is often affected by the MLP structure used. Deciding the MLP structure (the number of layers and the number of neurons in each layer) is therefore very important for these algorithms. The focus of this dissertation is the change in an MLP's output when its inputs and the weights between neurons in adjacent layers change, i.e., the sensitivity of the output to input and weight changes; this sensitivity differs across MLP structures, and its magnitude can guide the choice of a more suitable structure. In studying MLP sensitivity, we use the Central Limit Theorem (CLT) to carry out the statistical computation. The CLT can also be used to compute the sensitivity of the split-complex MLP structure (Split-CMLP), which is applicable to complex-valued signal systems such as QPSK. This dissertation therefore analyzes the sensitivity of both the ordinary MLP and the Split-CMLP to changes in the inputs and in the weights between neurons in adjacent layers. In the second half of the dissertation, we combine a hierarchical structure with the BP algorithm to improve the performance of the standard BP algorithm; the new algorithm is called the HBP algorithm. We also determine the operating parameters of the evolution strategy (ES), one of the most widely used evolutionary algorithms, to enhance its performance.
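As a rough illustration of the sensitivity analysis described in the abstract, the sketch below estimates by Monte Carlo simulation (rather than the CLT-based derivation used in the dissertation) how much a small MLP's output deviates when i.i.d. perturbations are injected into its inputs and weights. The 3-4-1 architecture, tanh activations, and perturbation scale are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Forward pass of a simple fully connected MLP with tanh activations."""
    a = x
    for W in weights:
        a = np.tanh(W @ a)
    return a

# Hypothetical 3-4-1 structure (sizes chosen for illustration only).
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
x = rng.normal(size=3)
y0 = mlp_forward(x, weights)

# Monte Carlo sensitivity estimate: inject small i.i.d. perturbations
# into the inputs and weights, and observe the output deviation.
sigma, trials = 0.01, 2000
deviations = []
for _ in range(trials):
    x_p = x + rng.normal(scale=sigma, size=x.shape)
    w_p = [W + rng.normal(scale=sigma, size=W.shape) for W in weights]
    deviations.append(mlp_forward(x_p, w_p) - y0)

# Standard deviation of the output error serves as a sensitivity proxy.
sensitivity = np.std(deviations)
print(sensitivity)
```

Repeating this estimate for different layer/neuron counts gives a quick empirical comparison of how structure affects sensitivity, which is the selection criterion the dissertation develops analytically.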
Abstract (English): The multilayer perceptron (MLP) is often used in algorithms such as the back-propagation (BP) algorithm, evolutionary algorithms (EA), and the extreme learning machine (ELM). Among these, BP and EA are more commonly operated on MLP structures than ELM is, and the MLP structure used always affects the performance of these algorithms. It is therefore important to choose a feasible MLP structure (i.e., the number of layers and the number of neurons in each layer) for each of them. The main work of this dissertation is to analyze the change in the output of an MLP due to changes in the inputs and in the weights between neurons in adjacent layers, i.e., to analyze the sensitivity of the MLP to errors in the inputs and weights. Different MLP structures lead to different sensitivity values, so these values provide a basis for choosing a proper MLP structure for the related algorithm. To study the sensitivity of an MLP, we use the Central Limit Theorem (CLT) in the statistical computation of the sensitivity. The CLT can also be extended to the sensitivity computation of the split-complex MLP (Split-CMLP), which can be used in complex-valued signal systems such as QPSK. We therefore analyze the sensitivity of both the MLP and the Split-CMLP in this dissertation. In addition, we combine a hierarchical structure with the BP algorithm to improve the performance of the standard BP algorithm; the new algorithm is named the HBP algorithm. Finally, we introduce an approach to choosing the operating parameters of the evolution strategy (ES), the most popular of the evolutionary algorithms, to improve its performance.
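The split-complex MLP mentioned in the abstracts applies a real-valued activation separately to the real and imaginary parts of each neuron's complex net input, which keeps the nonlinearity bounded for complex signals such as QPSK symbols. A minimal sketch of one such layer follows; the layer size, weight values, and tanh activation are assumptions for illustration, not the thesis's exact model.

```python
import numpy as np

def split_complex_activation(z):
    """Split-complex nonlinearity: a real activation (tanh here) is
    applied separately to the real and imaginary parts of z."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def split_cmlp_layer(x, W):
    """One Split-CMLP layer: complex weighted sum, then split activation."""
    return split_complex_activation(W @ x)

# Illustrative complex weights and a QPSK-like input vector.
rng = np.random.default_rng(1)
W = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)   # unit-energy QPSK symbols
y = split_cmlp_layer(x, W)
print(y)
```

Because tanh is applied component-wise, both the real and imaginary parts of every output stay in (-1, 1), and the same perturbation-based sensitivity reasoning used for the real MLP can be applied to each component.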
Keywords (Chinese) ★ sensitivity
★ multilayer perceptron
★ back-propagation algorithm
★ evolutionary strategy algorithm
Keywords (English) ★ evolutionary strategy
★ back-propagation
★ multilayer perceptron
★ sensitivity
Table of Contents Chapter 1 Introduction 1
1.1 Motivation of the Dissertation 1
1.2 Overview of the Dissertation 4
1.3 Organization of the Dissertation 6
Chapter 2 Sensitivity of the Multilayer Perceptron due to the Errors of the Inputs and Weights 8
2.1 MLP Model 8
2.2 Sensitivity Computation of the MLP 11
2.3 Sensitivity Analysis of the MLP 22
2.4 Summary 31
Chapter 3 Sensitivity of the Split-Complex valued Multilayer Perceptron due to the Errors of the Inputs and Weights 33
3.1 Split-CMLP Model 33
3.2 Sensitivity Computation of the Split-CMLP 37
3.3 Analysis of the Sensitivity for the Split-CMLP 49
3.4 Summary 59
Chapter 4 Hierarchical Back-Propagation Algorithm for an MLP Decision Feedback Equalizer 61
4.1 MLP-Based DFE 61
4.2 Hierarchical BP (HBP) Algorithm 65
4.3 Computer Simulations 72
4.4 Summary 80
Chapter 5 Improving the Evolutionary Strategy (ES) Algorithm by Choosing Appropriate Parameters 81
5.1 Evolutionary Strategy (ES) 81
5.2 Analysis of Mutation Rates 83
5.3 Simulation Results 85
5.4 Summary 93
Chapter 6 Conclusions 95
References 98
Appendix A 104
Appendix B 109
Appendix C Author’s Information 113
Appendix D Publication List 114
Advisor: Chia-Lu Ho (賀嘉律)    Date of Approval: 2007-12-01
