References
[1]. 林大貴 (2017). TensorFlow + Keras深度學習人工智慧實務應用 [TensorFlow + Keras: Deep Learning and Artificial Intelligence in Practice]. DrMaster Press.
[2]. 周志華 (2016). 機器學習 [Machine Learning]. Tsinghua University Press.
[3]. 黃安埠 (2017). 深入淺出深度學習-原理剖析與python實踐 [Deep Learning Made Simple: Principles and Python Practice]. Publishing House of Electronics Industry.
[4]. 鄭澤宇, 顧思宇 (2017). TensorFlow實戰Google深度學習框架 [TensorFlow in Action: Google's Deep Learning Framework]. Publishing House of Electronics Industry.
[5]. Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. (2015). Fast and accurate deep network learning by exponential linear units.
[6]. Tom M. Mitchell. (1997). Machine Learning. McGraw-Hill, Inc.
[7]. Friedman J. (1994). Flexible metric nearest neighbor classification. Technical Report.
[8]. Hastie T, Tibshirani R, Friedman J. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag.
[9]. Cortes C, Vapnik V. (1995). Support-vector networks. Kluwer Academic Publishers.
[10]. Rabiner L, Juang B. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, January 1986.
[11]. Michael Kearns, Yishay Mansour and Andrew Y. Ng. (1999). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence.
[12]. Hinton G.E., Osindero S. and Teh Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation.
[13]. LeCun Y., Bengio Y. and Hinton G.E. (2015). Deep learning. Nature.
[14]. He K, Zhang X, Ren S, Sun J. (2015). Deep Residual Learning for Image Recognition.
[15]. R. Sutton et al. (1998). Reinforcement Learning: An Introduction.
[16]. Geoffrey E. Hinton, Simon Osindero, Yee-Whye Teh. (2006). A fast learning algorithm for deep belief nets, 1527-1554.
[17]. Andrew Ng. (2011). Unsupervised Feature Learning and Deep Learning.
[18]. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. (2015). Deep Residual Learning for Image Recognition.
[19]. Reed, R. D., R. J. Marks. (1998). Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks.
[20]. TensorFlow-Google’s latest machine learning system.
[21]. Tickle, A. B., R. Andrews, M. Golea, and J. Diederich. (1998). The truth will come to light: Directions and challenges in extracting the knowledge embedded within trained artificial neural networks. IEEE Transactions on Neural Networks, 9(6):1057-1067.
[22]. Kaare Brandt Petersen, Michael Syskind Pedersen. (2012). The Matrix Cookbook.
[23]. E. T. Jaynes. (2003). Probability Theory: The Logic of Science.
[24]. Thomas M. Cover, Joy A. Thomas. (2006). Elements of Information Theory, 2nd Edition.
[25]. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein. Introduction to Algorithms, Third edition.
[26]. Andrew Ng, John Duchi. CS229: Machine Learning.
[27]. McCullagh, Peter; Nelder, John. (1989). Generalized Linear Models, Second Edition.
[28]. Cortes, C.; Vapnik, V. (1995). Support-vector networks, 273-297.
[29]. Qian, N. (1999). On the momentum term in gradient descent learning algorithms, 145-151.
[30]. Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence O(1/k²), 543-547.
[31]. Sebastian Ruder. (2016). An overview of gradient descent optimization algorithms.
[32]. LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey. (2015). Deep learning, 436-444.
[33]. McCulloch, W. S. and Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity, 115-133.
[34]. Kurt Hornik. (1991). Approximation Capabilities of Multilayer Feedforward Networks, 251-257.
[35]. Haykin, Simon. (1998). Neural Networks: A Comprehensive Foundation.
[36]. Hassoun, M. (1995). Fundamentals of Artificial Neural Networks. MIT Press, p. 48.
[37]. Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio. (2013). Maxout Networks, 1319-1327.
[38]. Rosenblatt, Frank. (1957). The Perceptron—a perceiving and recognizing automaton.
[39]. Xavier Glorot, Antoine Bordes, Yoshua Bengio. (2011). Deep Sparse Rectifier Neural Networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315-323.
[40]. Andrew L. Maas, Awni Y. Hannun, Andrew Y. Ng. (2013). Rectifier Nonlinearities Improve Neural Network Acoustic Models.
[41]. Andrej Karpathy, Justin Johnson. Convolutional Neural Networks for Visual Recognition.
[42]. Yann LeCun, Léon Bottou, Genevieve B. Orr, Klaus-Robert Müller. (1998). Efficient BackProp.
[43]. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. (2015). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.
[44]. Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). Learning representations by back-propagating errors.
[45]. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. (2016). Deep Residual Learning for Image Recognition.
[46]. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. (2016). Identity Mappings in Deep Residual Networks.
[47]. Sergey Ioffe, Christian Szegedy. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
[48]. Srivastava N, Hinton G, Krizhevsky A, et al. (2014). Dropout: A simple way to prevent neural networks from overfitting.
[49]. D. H. Hubel and T. N. Wiesel. (1968). Receptive fields and functional architecture of monkey striate cortex.
[50]. K. Fukushima. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position.
[51]. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, and L. D. Jackel. (1990). Handwritten digit recognition with a backpropagation network.
[52]. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE.
[53]. R. Hecht-Nielsen. (1989). Theory of the backpropagation neural network.
[54]. A. Krizhevsky, I. Sutskever, and G. E. Hinton. (2012). ImageNet classification with deep convolutional neural networks.
[55]. M. D. Zeiler and R. Fergus. (2014). Visualizing and understanding convolutional networks.
[56]. K. Simonyan, A. Zisserman. (2015). Very deep convolutional networks for large-scale image recognition.
[57]. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich. (2014). Going deeper with convolutions.
[58]. K. He, X. Zhang, S. Ren, and J. Sun. (2015). Deep residual learning for image recognition.
[59]. Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing, 3rd Edition.
[60]. Matthew D. Zeiler and Rob Fergus. (2014). Visualizing and Understanding Convolutional Networks, 818-833.
[61]. Dominik Scherer, Andreas Müller, and Sven Behnke. (2010). Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition.
[62]. Kumar Chellapilla, Sidd Puri, Patrice Simard. (2006). High Performance Convolutional Neural Networks for Document Processing.
[63]. N. Kalchbrenner, E. Grefenstette, P. Blunsom. (2014). A Convolutional Neural Network for Modelling Sentences.