References
I. Books
1. Chollet, F. (2018). Deep learning with Python. Manning Publications.
2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
3. 張林忠. (2014). 分析師關鍵報告2:張林忠教你程式交易. 寰宇出版社.
II. English Conference Papers (Conference Proceedings)
4. Chen, K., Zhou, Y., & Dai, F. (2015). A LSTM-based method for stock returns prediction: A case study of China stock market. Paper presented at the 2015 IEEE International Conference on Big Data (Big Data).
5. Collobert, R., & Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. Paper presented at the Proceedings of the 25th International Conference on Machine Learning.
6. Dauphin, Y. N., Fan, A., Auli, M., & Grangier, D. (2017). Language modeling with gated convolutional networks. Paper presented at the Proceedings of the 34th International Conference on Machine Learning-Volume 70.
7. El Hihi, S., & Bengio, Y. (1996). Hierarchical recurrent neural networks for long-term dependencies. Paper presented at the Advances in Neural Information Processing Systems.
8. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. Paper presented at the Proceedings of the IEEE International Conference on Computer Vision.
9. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
10. Hermans, M., & Schrauwen, B. (2013). Training and analysing deep recurrent neural networks. Paper presented at the Advances in Neural Information Processing Systems.
11. Jozefowicz, R., Zaremba, W., & Sutskever, I. (2015). An empirical exploration of recurrent network architectures. Paper presented at the International Conference on Machine Learning.
12. Klambauer, G., Unterthiner, T., Mayr, A., & Hochreiter, S. (2017). Self-normalizing neural networks. Paper presented at the Advances in Neural Information Processing Systems.
13. Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
14. Martens, J., & Sutskever, I. (2011). Learning recurrent neural networks with Hessian-free optimization. Paper presented at the Proceedings of the 28th International Conference on Machine Learning (ICML-11).
15. Pascanu, R., Mikolov, T., & Bengio, Y. (2013). On the difficulty of training recurrent neural networks. Paper presented at the International Conference on Machine Learning.
16. Santos, C. D., & Zadrozny, B. (2014). Learning character-level representations for part-of-speech tagging. Paper presented at the Proceedings of the 31st International Conference on Machine Learning (ICML-14).
17. Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Paper presented at the Advances in Neural Information Processing Systems.
18. Tsantekidis, A., Passalis, N., Tefas, A., Kanniainen, J., Gabbouj, M., & Iosifidis, A. (2017). Using deep learning to detect price change indications in financial markets. Paper presented at the 2017 25th European Signal Processing Conference (EUSIPCO).
III. English Journal Papers (Journal Articles)
19. Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
20. Bai, S., Kolter, J. Z., & Koltun, V. (2018). An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
21. Bao, W., Yue, J., & Rao, Y. (2017). A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS ONE, 12(7), e0180944.
22. Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157-166.
23. Berge, T. (2014). Predicting Recessions with Leading Indicators: Model Averaging and Selection Over the Business Cycle. Journal of Forecasting.
24. Bottou, L., Soulie, F. F., Blanchet, P., & Liénard, J.-S. (1990). Speaker-independent isolated digit recognition: Multilayer perceptrons vs. dynamic time warping. Neural Networks, 3(4), 453-465.
25. Cho, K., Van Merriënboer, B., Bahdanau, D., & Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
26. Di Persio, L., & Honchar, O. (2016). Artificial neural networks architectures for stock price prediction: Comparisons and applications. International Journal of Circuits, Systems and Signal Processing, 10, 403-413.
27. Graves, A. (2013). Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
28. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
29. Kalchbrenner, N., Espeholt, L., Simonyan, K., Oord, A. v. d., Graves, A., & Kavukcuoglu, K. (2016). Neural machine translation in linear time. arXiv preprint arXiv:1610.10099.
30. Kim, J., El-Khamy, M., & Lee, J. (2017). Residual LSTM: Design of a deep recurrent architecture for distant speech recognition. arXiv preprint arXiv:1701.03360.
31. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
32. Koutnik, J., Greff, K., Gomez, F., & Schmidhuber, J. (2014). A clockwork RNN. arXiv preprint arXiv:1402.3511.
33. LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4), 541-551.
34. Lei, T., Zhang, Y., & Artzi, Y. (2017). Training RNNs as fast as CNNs. arXiv preprint arXiv:1709.02755.
35. Melis, G., Dyer, C., & Blunsom, P. (2017). On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589.
36. Merity, S., Keskar, N. S., & Socher, R. (2017). Regularizing and optimizing LSTM language models. arXiv preprint arXiv:1708.02182.
37. Pradhan, S., & Longpre, S. (2016). Exploring the depths of recurrent neural networks with stochastic residual learning.
38. Prakash, A., Hasan, S. A., Lee, K., Datla, V., Qadir, A., Liu, J., & Farri, O. (2016). Neural paraphrase generation with stacked residual LSTM networks. arXiv preprint arXiv:1610.03098.
39. Sejnowski, T. J., & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text. Complex Systems, 1(1), 145-168.
40. Serrà, J., Pascual, S., & Karatzoglou, A. (2018). Towards a universal neural network encoder for time series. Artificial Intelligence Research and Development: Current Challenges, New Trends and Applications, 308, 120.
41. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), 1929-1958.
42. Tu, S. (2017). Passive Market Share to Overtake Active in the US No Later than 2024. Moody’s Investors Service.
43. Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., . . . Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. Proceedings of the 9th ISCA Speech Synthesis Workshop (SSW), 125.
44. Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., & Lang, K. J. (1995). Phoneme recognition using time-delay neural networks. In Backpropagation: Theory, Architectures and Applications (pp. 35-61).
45. Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., . . . Macherey, K. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
IV. Chinese Literature
46. 李映瑾. (2018). 指數股票型基金(ETF) 全球發展概況與可能影響. 台北外匯市場發展基金會委託計畫.
47. 董寶蘭. (2010). 程式交易策略實證研究-以投資 ETF0050 為例. 淡江大學管理科學研究所企業經營碩士在職專班學位論文, 1-65.
V. Online Resources
48. Karpathy, A. (2019, April 25). A Recipe for Training Neural Networks. Retrieved from http://karpathy.github.io/2019/04/25/recipe/
49. Google LLC. Keras. Retrieved from https://www.tensorflow.org/guide/keras/
50. Siblis Research Ltd. (2019). S&P 500 Sector Weightings 1979 – 2019. Retrieved from http://siblisresearch.com/data/sp-500-sector-weightings/
51. Olah, C. (2015). Understanding LSTM Networks. Retrieved from http://colah.github.io/posts/2015-08-Understanding-LSTMs/
52. 凱衛資訊股份有限公司. 交易平台介紹. Retrieved from http://www.multicharts.com.tw/characteristic.aspx