Master's Thesis 104221018 — Detailed Record




Name: 李政瑜 (Cheng-Yu Lee)    Department: Department of Mathematics
Title: 影像模糊方法在蝴蝶辨識神經網路中之應用
(Application of Image Blurring Method in Butterfly Identification Neural Network)
Related Theses
★ A Study of Inviscid Standing Waves of Gas Flow Through Discontinuous Ducts
★ An Iteration Method for the Riemann Problem of Some Degenerate Hyperbolic Balance Laws
★ Existence of Generalized Solutions to the Riemann Problem for a Single Nonlinear Balance Law
★ Existence and Uniqueness of Solutions to Two-Point Boundary Value Problems for Systems of Nonlinear Second-Order Ordinary Differential Equations
★ Construction of Approximate Interval Solutions to the Cauchy Problem for the Compressible Euler Equations with Near-Sonic Flow
★ Properties of Solutions of Some Degenerate Quasilinear Wave Equations
★ Global Lipschitz Continuous Solutions of Quasilinear Wave Equations with Piecewise-Linear Initial Data
★ Advection-Diffusion-Reaction Equations for Equilibrium Models in Hydrogeology
★ Classical Solutions of Perturbed Riemann Problems for Nonlinear Conservation Laws
★ Periodicity of Solutions to Initial-Boundary Value Problems for the BBM and KdV Equations
★ Classical Solutions of Perturbed Riemann Problems for Resonant Conservation Laws
★ Behavior of Shock Wave Solutions of the Slightly Viscous Euler Equations in Compressible Flow
★ Existence of Global Weak Solutions to Initial-Boundary Value Problems for Inhomogeneous Systems of Hyperbolic Conservation Laws
★ Global Weak Solutions to the Cauchy Problem for Nonlinear Balance Laws
★ Some Lemmas on the Global Existence of Entropy Solutions to the Cauchy Problem for a Single Hyperbolic Conservation Law
★ Global Classical Solutions of Second-Order Nonlinear Conservation Laws
Full Text: Not openly available (access permanently restricted)
Abstract (Chinese): This study compares the training results of two deep learning models, the multilayer perceptron neural network and the convolutional neural network, on image recognition. It also surveys commonly used optimization algorithms and examines how the loss function governs the parameter updates; both models are trained on butterfly pictures.
From an openly accessible online image database we obtained 9,141 pictures of butterflies in five classes and assembled them into a dataset. Each model is trained on this dataset with hidden-layer structures we build ourselves, and we observe the training time and training accuracy, analyzing the fitting behavior of the iterative results. We then introduce PCA dimensionality reduction to preprocess the data and examine whether reducing the dimension of the image background can improve training or validation accuracy.
Abstract (English): The goal of this thesis is to compare the training results of two deep learning models, the multilayer perceptron neural network and the convolutional neural network, on image recognition, and to survey the popular optimization algorithms. To this end, we study the influence of the loss function on the iterative parameter updates and demonstrate both models on pictures of butterflies.
There are five classes of butterflies with 9,141 pictures in total, obtained from an online database website. We assemble these pictures into data samples for the two deep learning models and, after building the hidden layers ourselves, observe the training time and accuracy. We then analyze the fitting behavior of the iterative results. Finally, using principal component analysis (PCA) for dimensionality reduction, we preprocess the data and observe the effect of reducing the image background, to see whether it improves training or validation accuracy.
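As a concrete illustration of the two models compared in the abstract, the sketch below builds a small multilayer perceptron and a small convolutional network, assuming a TensorFlow/Keras setup. The input size, layer widths, and optimizer are illustrative assumptions, not the thesis's actual architecture.

```python
# Minimal sketch (illustrative, not the thesis's actual architecture):
# comparing a multilayer perceptron (MLP) and a small convolutional
# network (CNN) on 5-class butterfly images with TensorFlow/Keras.
# Assumes x_train has shape (N, 64, 64, 3) with pixels scaled to [0, 1]
# and y_train is one-hot encoded over the 5 butterfly classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5            # five butterfly classes, per the abstract
INPUT_SHAPE = (64, 64, 3)  # assumed image size; the thesis's may differ

def build_mlp():
    """Fully connected network: flatten pixels, then dense hidden layers."""
    return models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_cnn():
    """Convolutional network: conv/pool blocks, then a dense classifier."""
    return models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

for name, model in [("MLP", build_mlp()), ("CNN", build_cnn())]:
    model.compile(optimizer="adam",                 # one common optimizer choice
                  loss="categorical_crossentropy",  # matches one-hot labels
                  metrics=["accuracy"])
    # history.history records per-epoch loss/accuracy, which is what the
    # thesis inspects when analyzing fitting over the iterations:
    # history = model.fit(x_train, y_train, epochs=20, validation_split=0.2)
```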
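The PCA preprocessing step mentioned in the abstract can be sketched as follows, here with scikit-learn as an assumed implementation (the function name and component count are hypothetical). Flattening each image, projecting onto the leading principal components, and reconstructing tends to suppress fine background detail while keeping the dominant structure.

```python
# Minimal PCA-preprocessing sketch under assumed scikit-learn usage;
# the thesis's own preprocessing details may differ.
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(images, n_components=100):
    """images: (N, H, W, C) float array. Returns PCA-reconstructed images."""
    n, h, w, c = images.shape
    flat = images.reshape(n, h * w * c)        # one row per image
    # n_components must not exceed min(n, h*w*c); 100 is an assumption
    pca = PCA(n_components=n_components)       # keep the top components
    reduced = pca.fit_transform(flat)          # project to component space
    restored = pca.inverse_transform(reduced)  # approximate reconstruction
    return restored.reshape(n, h, w, c)
```

Training on the reconstructed images instead of the originals then tests whether reducing the image background helps training or validation accuracy.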
Keywords (Chinese): ★ 神經網路 (neural network)    Keywords (English):
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
1. Introduction
  1-1 Research Motivation
  1-2 Research Objectives
  1-3 Research Questions
  1-4 Research Methods
  1-5 Research Subjects
2. Deep Neural Models
  2-1 Feedforward Neural Networks
  2-2 Convolutional Neural Networks (CNN)
  2-3 Optimization Algorithms
3. System Environment and Functions
  3-1 System Environment
  3-2 System Functions
4. Data Processing and Training
  4-1 Dataset Construction
  4-2 Data Normalization
  4-3 PCA Dimensionality Reduction
  4-4 Model Validation Accuracy
5. Discussion and Future Work
  5-1 Discussion of Results
  5-2 Future Work
References
Advisor: 洪盟凱 (Meng-Kai Hong)    Date of Approval: 2018-10-30
