Master's/Doctoral Thesis 109225004 — Detailed Record




Name 黃文杰 (Wen-Jie Huang)    Graduate Institute Graduate Institute of Statistics (統計研究所)
Thesis Title On the Study of Feedforward Neural Networks: An Experimental Design Approach
  1. The author has agreed to make the electronic full text of this thesis openly available immediately.
  2. The open-access electronic full text is licensed to users for academic research only: personal, non-profit searching, reading, and printing.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Deep neural networks are an important machine learning tool with excellent performance in many fields. However, their black-box nature makes it difficult for users to understand how the input data drive the network. As new deep neural network techniques are developed and applied, new methods for explaining their decision processes have grown into an active research area. Gevrey, Dimopoulos, and Lek (2003) and Pizarroso, Portela, and Muñoz (2020) provide methods for computing sensitivities, together with the neural interpretation diagram, to describe the relationship between inputs and outputs; however, these methods have not been widely adopted. In this thesis, we examine the universal approximation theorem from a statistical perspective. In addition, we propose a new method that uses experimental design techniques to explain the input-output relationship of a deep neural network, and we demonstrate it through simulations and real examples.
Abstract (English) Deep neural networks (DNNs) are an essential machine learning tool with excellent performance in many fields. However, due to their black-box nature, it is difficult for users to understand how the input data drive the network. As new DNN technologies are developed and applied, explainable methods have become an active research field. Gevrey et al. (2003) and Pizarroso et al. (2020) provide methods for evaluating sensitivity, together with the neural interpretation diagram, to illustrate the relationship between inputs and outputs. However, these methods have not been widely adopted. In this thesis, we tackle the universal approximation theorem from a statistical perspective. In addition, we propose a new method to explain the relationship between the inputs and outputs of a DNN using the technique of experimental design. We illustrate the proposed method through simulations and real examples.
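As a rough illustration of the idea sketched in the abstract, and not the thesis's actual procedure, the following Python snippet trains a small feedforward network, evaluates it at the points of a two-level full factorial design, and reads off the main effect of each input on the network output. The data, settings, and names here are all hypothetical.

    # Hedged sketch, not the thesis's code: probe a fitted feedforward
    # network with a 2^k full factorial design and estimate main effects.
    import itertools
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy training data: y depends strongly on x1 and weakly on x2.
    X = rng.uniform(-1.0, 1.0, size=(500, 2))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * X[:, 1]

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                       random_state=0).fit(X, y)

    # Two-level full factorial design on the coded levels -1 / +1.
    design = np.array(list(itertools.product([-1.0, 1.0], repeat=X.shape[1])))
    response = net.predict(design)

    # Main effect of factor j: mean response at +1 minus mean response at -1.
    for j in range(design.shape[1]):
        high = response[design[:, j] == 1.0].mean()
        low = response[design[:, j] == -1.0].mean()
        print(f"main effect of x{j + 1}: {high - low:+.3f}")

Interaction effects follow from the same contrasts applied to products of the coded columns; ranking inputs by such factorial effects is the general experimental-design route to explanation that the abstract describes.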
Keywords (Chinese) ★ Explainable deep neural network
★ Feedforward neural network
★ Design of experiments
★ Full factorial design
★ Universal approximation theorem
Keywords (English) ★ Explainable deep neural network
★ Feedforward neural network
★ Design of experiments
★ Full factorial design
★ Universal approximation theorem
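The sensitivity methods of Gevrey et al. (2003) and Pizarroso et al. (2020) cited in the abstract summarize the partial derivatives of the network output with respect to each input. A minimal numerical sketch of that idea, using central finite differences instead of the analytic derivatives those papers compute, could look as follows; the helper name is hypothetical.

    # Hedged sketch of derivative-based sensitivity analysis: approximate
    # d f / d x_j of a fitted network by central finite differences.
    import numpy as np

    def numeric_sensitivities(predict, X, h=1e-4):
        """Central-difference partial derivatives of predict at each row of X."""
        n, p = X.shape
        sens = np.empty((n, p))
        for j in range(p):
            step = np.zeros(p)
            step[j] = h
            sens[:, j] = (predict(X + step) - predict(X - step)) / (2.0 * h)
        return sens

    # With net and X from the previous sketch:
    # S = numeric_sensitivities(net.predict, X)
    # print(np.sqrt((S ** 2).mean(axis=0)))  # RMS sensitivity per input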
Table of Contents 1 Introduction 1
2 Literature Review 2
3 Universal Approximation for Multi-layer FNN 3
3.1 One input variable 5
3.1.1 Single hidden layer 5
3.1.2 Two hidden layers with the same number of hidden nodes 7
3.1.3 Two hidden layers with different numbers of hidden nodes 9
3.1.4 Three hidden layers with different numbers of hidden nodes 10
3.2 Two input variables 13
3.3 Summary 15
4 Explainable FNN on Design of Experiment 16
4.1 Simulation 16
4.1.1 Drop-Wave function 17
4.1.2 Hartmann 4-dimensional function 20
4.2 Real data study 23
4.2.1 Airfoil Self-Noise Data Set 23
4.2.2 Plastic Dataset 25
5 Conclusion 27
References 28
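Sections 4.1.1 and 4.1.2 test the method on standard simulation benchmarks. Assuming the usual form from the computer-experiments literature, the Drop-Wave function is

    f(x_1, x_2) = -\frac{1 + \cos\bigl(12\sqrt{x_1^2 + x_2^2}\bigr)}{\tfrac{1}{2}\bigl(x_1^2 + x_2^2\bigr) + 2},
    \qquad x_1, x_2 \in [-5.12, 5.12],

a highly multimodal, non-additive surface that stresses both the approximation results of Chapter 3 and the factorial explanation method of Chapter 4.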
References Castillo, E., Sánchez-Maroño, N., Alonso-Betanzos, A., & Castillo, C. (2007). Functional network topology learning and sensitivity analysis based on ANOVA decomposition. Neural Computation, 19(1), 231–257.
Çetin, O., Temurtaş, F., & Gülgönül, Ş. (2015). An application of multilayer neural network on hepatitis disease diagnosis using approximations of sigmoid activation function. Dicle Medical Journal/Dicle Tip Dergisi, 42(2).
Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4), 303–314.
Dua, D., & Graff, C. (2017). UCI machine learning repository. Retrieved from http://archive.ics.uci.edu/ml
Eğrioğlu, E., Aladağ, Ç. H., & Günay, S. (2008). A new model selection strategy in artificial neural networks. Applied Mathematics and Computation, 195(2), 591–597.
Engelbrecht, A. P., & Cloete, I. (1996). A sensitivity analysis algorithm for pruning feedforward neural networks. In Proceedings of International Conference on Neural Networks (ICNN'96) (Vol. 2, pp. 1274–1278).
Fan, J., Ma, C., & Zhong, Y. (2021). A selective overview of deep learning. Statistical Science, 36(2), 264.
Fernández-Navarro, F., Carbonero-Ruz, M., Alonso, D. B., & Torres-Jiménez, M. (2016). Global sensitivity estimates for neural network classifiers. IEEE Transactions on Neural Networks and Learning Systems, 28(11), 2592–2604.
Fortuin, V., Garriga-Alonso, A., Wenzel, F., Rätsch, G., Turner, R., van der Wilk, M., & Aitchison, L. (2021). Bayesian neural network priors revisited. arXiv preprint arXiv:2102.06571.
Gevrey, M., Dimopoulos, I., & Lek, S. (2003). Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecological Modelling, 160(3), 249–264.
Guidotti, E. (2020). calculus: High dimensional numerical and symbolic calculus in R. arXiv preprint arXiv:2101.00086.
Kowalski, P. A., & Kusy, M. (2017). Sensitivity analysis for probabilistic neural network structure reduction. IEEE Transactions on Neural Networks and Learning Systems, 29(5), 1919–1932.
Märtens, K., & Yau, C. (2020). Neural decomposition: Functional anova with variational autoencoders. In International Conference on Artificial Intelligence and Statistics (pp. 2917–2927).
Montavon, G., Lapuschkin, S., Binder, A., Samek, W., & Müller, K.-R. (2017). Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition, 65, 211–222.
Pizarroso, J., Portela, J., & Muñoz, A. (2020). Neuralsens: sensitivity analysis of neural networks. arXiv preprint arXiv:2002.11423.
Santner, T. J., Williams, B. J., & Notz, W. I. (2003). The Design and Analysis of Computer Experiments (Vol. 1). Springer Series in Statistics (SSS).
Shafi, I., Ahmad, J., Shah, S. I., & Kashif, F. M. (2006). Impact of varying neurons and hidden layers in neural network architecture for a time frequency application. In 2006 IEEE International Multitopic Conference (pp. 188–193).
Siddique, M. A. B., Khan, M. M. R., Arif, R. B., & Ashrafi, Z. (2018). Study and observation of the variations of accuracies for handwritten digits recognition with various hidden layers and epochs using neural network algorithm. In 2018 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT) (pp. 118–123).
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2021). An Introduction to Statistical Learning: With Applications in R (2nd ed.). Springer Texts in Statistics.
Xie, N., Ras, G., van Gerven, M., & Doran, D. (2020). Explainable deep learning: A field guide for the uninitiated. Journal of Artificial Intelligence Research, 73, 329–396.
Advisors Li-Hsien Sun (孫立憲), Ming-Chung Chang (張明中)    Approval Date 2022-07-14
