References
[1] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with
neural networks,” Science, vol. 313, no. 5786, pp. 504–507, July 2006.
[2] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol, “Extracting and
Composing Robust Features with Denoising Autoencoders,” Proceedings of the
25th International Conference on Machine Learning (ICML 2008), July 2008.
[3] A. Ng, “Sparse autoencoder,” CS294A Lecture Notes, vol. 72, Stanford University, 2011.
[4] A. Makhzani and B. Frey, “k-Sparse Autoencoders,” ICLR 2014, March 2014.
[5] M. Udommitrak and B. Kijsirikul, “Incremental Feature Construction for Deep
Learning Using Sparse Auto-Encoder,” International Journal of Electrical Energy,
vol. 1, no. 3, pp. 173–176, September 2013.
[6] P. Baldi, “Autoencoders, unsupervised learning, and deep architectures,” Journal
of Machine Learning Research, Workshop and Conference Proceedings:
Proceedings of the 2011 ICML Workshop on Unsupervised and Transfer Learning,
vol. 27, Bellevue, WA, pp. 37–50, 2012.
[7] D. P. Kingma and M. Welling, “Auto-Encoding Variational Bayes,” International
Conference on Learning Representations (ICLR), 2014.
[8] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive
auto-encoders: Explicit invariance during feature extraction,” Proceedings of the
28th International Conference on Machine Learning (ICML ’11), pp. 833–840, June 2011.
[9] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, “Greedy Layer-Wise
Training of Deep Networks,” Proceedings of the 19th International Conference on
Neural Information Processing Systems (NIPS ’06), pp. 153–160, December 2006.
[10] G. E. Hinton, A. Krizhevsky, and S. D. Wang, “Transforming Auto-encoders,”
Artificial Neural Networks and Machine Learning – ICANN 2011, pp. 44–51, 2011.
[11] G. Alain and Y. Bengio, “What regularized auto-encoders learn from the
data-generating distribution,” The Journal of Machine Learning Research, vol. 15,
no. 1, pp. 3563–3593, January 2014.
[12] M. Tschannen, O. Bachem, and M. Lucic, “Recent Advances in
Autoencoder-Based Representation Learning,” NeurIPS 2018, December 2018.
[13] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin, “Variational
Autoencoder for Deep Learning of Images, Labels and Captions,” Advances in
Neural Information Processing Systems 29 (NIPS 2016), September 2016.
[14] J. Li, M. T. Luong, and D. Jurafsky, “A Hierarchical Neural Autoencoder for
Paragraphs and Documents,” Proceedings of the 53rd Annual Meeting of the
Association for Computational Linguistics and the 7th International Joint
Conference on Natural Language Processing, July 2015.
[15] X. Lu, Y. Tsao, S. Matsuda, and C. Hori, “Speech Enhancement Based on Deep
Denoising Autoencoder,” INTERSPEECH 2013, August 2013.
[16] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal
margin classifiers,” Proceedings of the Fifth Annual Workshop on Computational
Learning Theory (COLT ’92), pp. 144–152, 1992.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning
Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11,
pp. 2278–2324, November 1998.
[18] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20,
no. 3, pp. 273–297, 1995.
[19] P. Smolensky, “Information Processing in Dynamical Systems: Foundations of
Harmony Theory,” in Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, Vol. 1, MIT Press, pp. 194–281, 1986.
[20] G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief
nets,” Neural Computation, vol. 18, pp. 1527–1554, 2006.
[21] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for
accurate object detection and semantic segmentation (tech report v5),” 2014 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), October 2014.
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep
Convolutional Neural Networks,” Advances in Neural Information Processing
Systems 25 (NIPS 2012), 2012.
[23] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The Graph
Neural Network Model,” IEEE Transactions on Neural Networks, vol. 20, no. 1,
pp. 61–80, January 2009, doi: 10.1109/TNN.2008.2005605.
[24] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, “DRAW: A
Recurrent Neural Network for Image Generation,” Proceedings of the 32nd
International Conference on Machine Learning (ICML), 2015.
[25] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu, “CNN-RNN: A Unified
Framework for Multi-label Image Classification,” 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), June 2016.
[26] W. Byeon, T. M. Breuel, F. Raue, and M. Liwicki, “Scene labeling with LSTM
recurrent neural networks,” 2015 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), Boston, MA, pp. 3547–3555, 2015, doi:
10.1109/CVPR.2015.7298977.
[27] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for
Large-Scale Image Recognition,” International Conference on Learning
Representations (ICLR), April 2015.
[28] https://medium.com/@chenchoulo/convolution-neural-network-cnn-175d924bfcc1
[29] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image
Recognition,” arXiv:1512.03385 [cs.CV], December 2015.
[30] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural
Computation, vol. 9, no. 8, pp. 1735–1780, November 1997.
[31] B. A. Olshausen and D. J. Field, “Sparse coding with an overcomplete basis set:
A strategy employed by V1?,” Vision Research, vol. 37, pp. 3311–3325, 1997.
[32] Y. Bengio, E. Thibodeau-Laufer, G. Alain, and J. Yosinski, “Deep Generative
Stochastic Networks Trainable by Backprop,” arXiv preprint arXiv:1306.1091,
June 2013.
[33] Y. Bengio, A. Courville, and P. Vincent, “Representation Learning: A Review and
New Perspectives,” arXiv:1206.5538 [cs.LG], April 2014.
[34] T. Wong and Z. Luo, “Recurrent Auto-Encoder Model for Multidimensional Time
Series Representation,” ICLR 2018, January 2018.
[35] X. Wu, G. Jiang, X. Wang, P. Xie, and X. Li, “A Multi-Level-Denoising
Autoencoder Approach for Wind Turbine Fault Detection,” IEEE Access, vol. 7,
pp. 59376–59387, 2019, doi: 10.1109/ACCESS.2019.2914731.
[36] J. Chen, S. Sathe, C. C. Aggarwal, and D. Turaga, “Outlier Detection with
Autoencoder Ensembles,” Proceedings of the 2017 SIAM International Conference
on Data Mining (SDM), 2017.
[37] J. Li, M. T. Luong, and D. Jurafsky, “A Hierarchical Neural Autoencoder for
Paragraphs and Documents”, arXiv:1506.01057 [cs.CL], June 2015.
[38] Y. Li, Z. Wang, X. Yang, et al., “Efficient convolutional hierarchical autoencoder
for human motion prediction,” The Visual Computer, vol. 35, pp. 1143–1156,
June 2019. https://doi.org/10.1007/s00371-019-01692-9
[39] D. Bouchacourt, R. Tomioka, and S. Nowozin, “Multi-Level Variational
Autoencoder: Learning Disentangled Representations from Grouped
Observations,” Proceedings of the Thirty-Second AAAI Conference on Artificial
Intelligence (AAAI-18), 2018.
[40] Y. C. Chen, W. C. Peng, and S. Y. Lee, “Efficient Algorithms for Influence
Maximization in Social Networks,” Knowledge and Information Systems, vol. 33,
no. 3, pp. 577–601, 2012.
[41] J. Shlens, “Notes on Kullback-Leibler Divergence and Likelihood Theory”,
arXiv:1404.2000, April 2014.
[42] https://tdhopper.com/blog/cross-entropy-and-kl-divergence
[43] T. Derr, C. Aggarwal, and J. Tang, “Signed Network Modeling Based on Structural
Balance Theory,” Proceedings of the 27th ACM International Conference on
Information and Knowledge Management (CIKM ’18), pp. 557–566, October 2018.
[44] Y. Chen, “A novel algorithm for mining opinion leaders in social networks,”
World Wide Web, vol. 22, pp. 1279–1295, 2019.
https://doi.org/10.1007/s11280-018-0586-x