References
[1] T. K. Aslanyan and F. Frasincar, 2021, “Utilizing textual reviews in latent factor models for recommender systems,” In Proceedings of the 36th Annual ACM Symposium on Applied Computing (SAC '21), Association for Computing Machinery, New York, NY, USA, 1931–1940.
[2] L. J. Ba, R. Kiros, and G. E. Hinton, 2016, “Layer Normalization,” CoRR abs/1607.06450 (2016).
[3] M. Chen, Y. Bai, J. D. Lee, T. Zhao, H. Wang, C. Xiong, and R. Socher, 2020, “Towards understanding hierarchical learning: benefits of neural representations,” In Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS'20), Curran Associates Inc., Red Hook, NY, USA, Article 1856, 22134–22145.
[4] X. Chen, H. Xu, Y. Zhang, J. Tang, Y. Cao, Z. Qin, and H. Zha, 2018, “Sequential Recommendation with User Memory Networks,” In Proceedings of WSDM, ACM, 108–116.
[5] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, 2014, “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation,” In Proceedings of EMNLP, 1724–1734.
[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova, 2019, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” In Proceedings of NAACL.
[7] T. Donkers, B. Loepp, and J. Ziegler, 2017, “Sequential User-based Recurrent Neural Network Recommendations,” In Proceedings of RecSys, 152–160.
[8] F. M. Harper and J. A. Konstan, 2015, “The MovieLens Datasets: History and Context,” ACM Trans. Interact. Intell. Syst. 5, 4, Article 19 (Dec. 2015), 19 pages.
[9] K. He, X. Zhang, S. Ren, and J. Sun, 2016, “Deep Residual Learning for Image Recognition,” In Proceedings of CVPR, IEEE, 770–778.
[10] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T. Chua, 2017, “Neural Collaborative Filtering,” In Proceedings of WWW, ACM, 173–182.
[11] D. Hendrycks and K. Gimpel, 2016, “Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units,” CoRR abs/1606.08415 (2016).
[12] B. Hidasi and A. Karatzoglou, 2018, “Recurrent Neural Networks with Top-k Gains for Session-based Recommendations,” In Proceedings of CIKM, ACM, 843–852.
[13] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, 2016, “Session-based Recommendations with Recurrent Neural Networks,” In Proceedings of ICLR.
[14] G. Hinton, O. Vinyals, and J. Dean, 2015, “Distilling the knowledge in a neural network,” In Deep Learning and Representation Learning Workshop.
[15] S. Hochreiter and J. Schmidhuber, 1997, “Long Short-Term Memory,” Neural Computation 9, 8 (Nov. 1997), 1735–1780.
[16] J. Huang, W. X. Zhao, H. Dou, J. Wen, and E. Y. Chang, 2018, “Improving Sequential Recommendation with Knowledge-Enhanced Memory Networks,” In Proceedings of SIGIR, ACM, 505–514.
[17] Y. Ji, A. Sun, J. Zhang, and C. Li, 2020, “A Re-visit of the Popularity Baseline in Recommender Systems,” In Proceedings of SIGIR, ACM, 1749–1752.
[18] S. Kabbur, X. Ning, and G. Karypis, 2013, “FISM: Factored Item Similarity Models for top-N Recommender Systems,” In Proceedings of KDD, ACM, 659–667.
[19] W. Kang and J. McAuley, 2018, “Self-Attentive Sequential Recommendation,” In Proceedings of ICDM, 197–206.
[20] D. P. Kingma and J. Ba, 2015, “Adam: A Method for Stochastic Optimization,” In Proceedings of ICLR.
[21] Y. Koren, 2008, “Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model,” In Proceedings of KDD, ACM, 426–434.
[22] Y. Koren and R. Bell, 2011, “Advances in Collaborative Filtering,” Recommender Systems Handbook, Springer US, Boston, MA, 145–186.
[23] Y. Koren, R. Bell, and C. Volinsky, 2009, “Matrix Factorization Techniques for Recommender Systems,” Computer 42, 8 (Aug. 2009), 30–37.
[24] J. Li, Z. Tu, B. Yang, M. R. Lyu, and T. Zhang, 2018, “Multi-Head Attention with Disagreement Regularization,” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Brussels, Belgium, 2897–2903.
[25] J. Li, P. Ren, Z. Chen, Z. Ren, T. Lian, and J. Ma, 2017, “Neural Attentive Session-based Recommendation,” In Proceedings of CIKM, ACM, 1419–1428.
[26] J. Lian, X. Zhou, F. Zhang, Z. Chen, X. Xie, and G. Sun, 2018, “xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems,” In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18), Association for Computing Machinery, New York, NY, USA, 1754–1763.
[27] G. Linden, B. Smith, and J. York, 2003, “Amazon.com Recommendations: Item-to-Item Collaborative Filtering,” IEEE Internet Computing 7, 1 (Jan. 2003), 76–80.
[28] J. Ni, J. Li, and J. McAuley, 2019, “Justifying recommendations using distantly-labeled reviews and fine-grained aspects,” In Proceedings of Empirical Methods in Natural Language Processing (EMNLP).
[29] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, 2018, “Improving language understanding by generative pre-training,” OpenAI Technical Report.
[30] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, 2009, “BPR: Bayesian Personalized Ranking from Implicit Feedback,” In Proceedings of UAI, AUAI Press, Arlington, Virginia, United States, 452–461.
[31] S. Rendle, C. Freudenthaler, and L. Schmidt-Thieme, 2010, “Factorizing Personalized Markov Chains for Next-basket Recommendation,” In Proceedings of WWW, ACM, 811–820.
[32] R. Salakhutdinov and A. Mnih, 2007, “Probabilistic Matrix Factorization,” In Proceedings of NIPS, Curran Associates Inc., USA, 1257–1264.
[33] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, 2001, “Item-based Collaborative Filtering Recommendation Algorithms,” In Proceedings of WWW, ACM, 285–295.
[34] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie, 2015, “AutoRec: Autoencoders Meet Collaborative Filtering,” In Proceedings of WWW, ACM, 111–112.
[35] G. Shani, D. Heckerman, and R. I. Brafman, 2005, “An MDP-Based Recommender System,” J. Mach. Learn. Res. 6 (Dec. 2005), 1265–1295.
[36] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, 2014, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” J. Mach. Learn. Res. 15, 1 (Jan. 2014), 1929–1958.
[37] F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang, 2019, “BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer,” In Proceedings of CIKM, ACM, 1441–1450.
[38] G. Tang, M. Müller, A. Rios, and R. Sennrich, 2018, “Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures,” In Proceedings of EMNLP, 4263–4272.
[39] J. Tang and K. Wang, 2018, “Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding,” In Proceedings of WSDM, 565–573.
[40] W. L. Taylor, 1953, “‘Cloze Procedure’: A New Tool for Measuring Readability,” Journalism Bulletin 30, 4 (1953), 415–433.
[41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, 2017, “Attention is All you Need,” In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), Curran Associates Inc., Red Hook, NY, USA, 6000–6010.
[42] Y. Wu, C. DuBois, A. X. Zheng, and M. Ester, 2016, “Collaborative Denoising Auto-Encoders for Top-N Recommender Systems,” In Proceedings of WSDM, ACM, 153–162.
[43] M. D. Zeiler and R. Fergus, 2014, “Visualizing and understanding convolutional networks,” In Proceedings of ECCV.