References
[1] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179-211, 1990, doi: 10.1016/0364-0213(90)90002-E.
[2] Y. Bengio, P. Simard, and P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, Mar. 1994, doi: 10.1109/72.279181.
[3] R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," in Proceedings of the 30th International Conference on Machine Learning (PMLR), 2013, pp. 1310-1318. [Online]. Available: https://proceedings.mlr.press/v28/pascanu13.html
[4] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, Nov. 1997, doi: 10.1162/neco.1997.9.8.1735.
[5] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[6] A. Vaswani et al., "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[7] G. E. P. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time Series Analysis: Forecasting and Control, 5th ed. Hoboken, NJ: John Wiley & Sons, 2015.
[8] E. S. Gardner Jr., "Exponential smoothing: The state of the art," Journal of Forecasting, vol. 4, no. 1, pp. 1-28, 1985.
[9] E. S. Gardner Jr., "Exponential smoothing: The state of the art—Part II," International Journal of Forecasting, vol. 22, no. 4, pp. 637-666, 2006.
[10] R. B. Cleveland, W. S. Cleveland, J. E. McRae, and I. Terpenning, "STL: A seasonal-trend decomposition procedure based on loess," Journal of Official Statistics, vol. 6, no. 1, pp. 3-73, 1990.
[11] E. B. Dagum and S. Bianconcini, Seasonal adjustment methods and real time trend-cycle estimation. Springer, 2016.
[12] C. Meek, D. M. Chickering, and D. Heckerman, "Autoregressive tree models for time-series analysis," in Proceedings of the 2002 SIAM International Conference on Data Mining, 2002, pp. 229-244.
[13] K.-J. Kim, "Financial time series forecasting using support vector machines," Neurocomputing, vol. 55, no. 1-2, pp. 307-319, 2003.
[14] N. I. Sapankevych and R. Sankar, "Time series prediction using support vector machines: A survey," IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 24-38, 2009.
[15] H. Tyralis and G. Papacharalampous, "Variable selection in time series forecasting using random forests," Algorithms, vol. 10, no. 4, p. 114, 2017.
[16] D. Salinas, V. Flunkert, J. Gasthaus, and T. Januschowski, "DeepAR: Probabilistic forecasting with autoregressive recurrent networks," International Journal of Forecasting, vol. 36, no. 3, pp. 1181-1191, 2020.
[17] A. van den Oord et al., "WaveNet: A generative model for raw audio," arXiv preprint arXiv:1609.03499, 2016.
[18] S. Bai, J. Z. Kolter, and V. Koltun, "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling," arXiv preprint arXiv:1803.01271, 2018.
[19] G. Lai, W.-C. Chang, Y. Yang, and H. Liu, "Modeling long- and short-term temporal patterns with deep neural networks," in Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 2018, pp. 95-104.
[20] J. Cheng, K. Huang, and Z. Zheng, "Towards better forecasting by fusing near and distant future visions," in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, vol. 34, no. 4, pp. 3593-3600.
[21] S. Li et al., "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting," Advances in Neural Information Processing Systems, vol. 32, 2019.
[22] H. Zhou et al., "Informer: Beyond efficient transformer for long sequence time-series forecasting," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, no. 12, pp. 11106-11115.
[23] H. Wu, J. Xu, J. Wang, and M. Long, "Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting," Advances in Neural Information Processing Systems, vol. 34, pp. 22419-22430, 2021.
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
[25] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[26] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[27] C. Szegedy et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1-9.
[28] C. Olah, "Understanding LSTM Networks," colah's blog, 2015. [Online]. Available: https://colah.github.io/posts/2015-08-Understanding-LSTMs/
[29] J. L. Ba, J. R. Kiros, and G. E. Hinton, "Layer normalization," arXiv preprint arXiv:1607.06450, 2016.
[30] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning (PMLR), 2015, pp. 448-456.
[31] Y. Wu and K. He, "Group normalization," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19.
[32] A. Trindade, "ElectricityLoadDiagrams20112014," UCI Machine Learning Repository, doi: 10.24432/C58C86.
[33] California Department of Transportation (Caltrans), Caltrans Performance Measurement System (PeMS). [Online]. Available: https://pems.dot.ca.gov/