Thesis 109423040 — Detailed Record




Author: 許書瑜 (Shu-Yu Hsu)    Department: Information Management (資訊管理學系)
Thesis Title: 雙向Transformer架構之序列化推薦系統
(Bidirectional Transformer on Sequential Recommendation)
Related Theses
★ 台灣50走勢分析:以多重長短期記憶模型架構為基礎之預測
★ 以多重遞迴歸神經網路模型為基礎之黃金價格預測分析
★ 增量學習用於工業4.0瑕疵檢測
★ 遞回歸神經網路於電腦零組件銷售價格預測之研究
★ 長短期記憶神經網路於釣魚網站預測之研究
★ 基於深度學習辨識跳頻信號之研究
★ Opinion Leader Discovery in Dynamic Social Networks
★ 深度學習模型於工業4.0之機台虛擬量測應用
★ A Novel NMF-Based Movie Recommendation with Time Decay
★ 以類別為基礎sequence-to-sequence模型之POI旅遊行程推薦
★ A DQN-Based Reinforcement Learning Model for Neural Network Architecture Search
★ Neural Network Architecture Optimization Based on Virtual Reward Reinforcement Learning
★ 生成式對抗網路架構搜尋
★ 以漸進式基因演算法實現神經網路架構搜尋最佳化
★ Enhanced Model Agnostic Meta Learning with Meta Gradient Memory
★ 遞迴類神經網路結合先期工業廢水指標之股價預測研究
Files: export formats available (Endnote RIS, BibTeX); full text viewable in the system after 2028-01-01 (embargoed)
Abstract (Chinese, translated): With the growing popularity of e-commerce, recommender systems have become essential for helping users find the products they need. A recommender system's main task is to predict and recommend the next item to a user, and the more accurately the better. However, user preferences are generally not fixed but shift over time, so modeling a user's dynamically changing preferences from past behavior is both important and challenging. Most earlier approaches used sequential neural networks that encode a user's historical interactions from left to right for recommendation. Such a unidirectional architecture, however, limits the power of hidden representations in user behavior sequences, and in real life a person's behavior is not a neatly ordered sequence. Bidirectional models were therefore proposed, in which a Cloze task is adopted to effectively avoid information leakage. In addition, we argue that the more similar items' categories and users' types are, the higher the probability that they will prefer the same item; exploiting this information should therefore improve recommendation accuracy. This thesis fuses left and right context through a bidirectional model architecture and incorporates product features and user-related information to make recommendations more precise. Experiments on several real-world datasets show that the proposed architecture outperforms several commonly used recommendation methods, demonstrating the practicality of the model.
Abstract (English): The popularity of e-commerce has made recommender systems an essential tool for users to find the products they want. Modeling a user's dynamic preferences from historical behavior is important but challenging. Previous methods used sequential neural networks to encode a user's left-to-right historical interactions for recommendation. However, such unidirectional architectures limit the power of hidden representations in user behavior sequences, and rigidly ordered sequences are not realistic. A bidirectional model is therefore proposed, and a Cloze task is employed to train it efficiently while avoiding information leakage. This thesis adopts a bidirectional model to integrate left and right context and adds information about item characteristics and users (BTSR) to make more accurate recommendations. Experiments on four real-world datasets show that BTSR outperforms state-of-the-art baselines, demonstrating the practicality of the proposed recommendation model.
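The thesis itself is embargoed until 2028, but the Cloze training objective the abstract describes (following BERT4Rec [37]) can be sketched minimally: a fraction of items in a user's interaction sequence is replaced by a mask token, and the bidirectional model is trained to recover only the masked items, so no position ever sees its own target. The `MASK_ID` value and `mask_prob` rate below are illustrative assumptions, not the thesis's actual parameters.

```python
import random

MASK_ID = 0  # reserved token id for the [mask] placeholder (assumption)

def cloze_mask(sequence, mask_prob=0.2, rng=None):
    """Randomly replace items in an interaction sequence with MASK_ID.

    Returns the masked sequence and a label list where unmasked
    positions are None; the training loss is computed only at the
    masked positions, which avoids information leakage in a
    bidirectional encoder.
    """
    rng = rng or random.Random()
    masked, labels = [], []
    for item in sequence:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)
            labels.append(item)   # the model must recover this item
        else:
            masked.append(item)
            labels.append(None)   # no loss at unmasked positions
    return masked, labels

# Example: a user's item-id history, masked with a fixed seed
history = [12, 7, 33, 5, 18, 42]
masked_seq, labels = cloze_mask(history, mask_prob=0.3, rng=random.Random(1))
# masked_seq → [0, 7, 33, 0, 18, 42]; labels → [12, None, None, 5, None, None]
```

At inference time, a single mask token is appended to the end of the sequence and the model's prediction for that position becomes the next-item recommendation, which is how Cloze-trained recommenders bridge the gap between training and prediction.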
Keywords (Chinese, translated) ★ Deep learning
★ Recommender systems
★ Attention mechanism
Keywords (English) ★ Recommender systems
★ Deep learning
★ Transformer
★ Attention mechanism
Table of Contents: Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of contents iv
List of Figures v
List of Tables vi
1. INTRODUCTION 1
2. RELATED WORK 3
2.1 Conventional Recommendation 3
2.2 Sequential Recommendation 4
2.3 Attention Mechanism 5
3. PROPOSED METHOD 6
3.1 Model Architecture 7
3.2 Embedding Layer 8
3.3 Transformer Layer 9
3.4 Model Learning 14
4. PERFORMANCE EVALUATION 15
4.1 Experiments Setup 15
4.2 Overall Performance Comparison 19
4.3 Effectiveness of the characteristics of the items and user information 21
4.4 Ablation Study 21
4.5 Case Study 25
4.6 Parameter Settings 25
5. CONCLUSION 25
REFERENCES 27
References
[1] T. K. Aslanyan and F. Frasincar, 2021, “Utilizing textual reviews in latent factor models for recommender systems,” In Proceedings of the 36th Annual ACM Symposium on Applied Computing (SAC ′21), Association for Computing Machinery, New York, NY, USA, 1931–1940.
[2] L. J. Ba, R. Kiros, and G. E. Hinton, 2016, “Layer Normalization,” CoRR abs/1607.06450 (2016).
[3] M. Chen, Y. Bai, J. D. Lee, T. Zhao, H. Wang, C. Xiong, and R. Socher, 2020, “Towards understanding hierarchical learning: benefits of neural representations,” In Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS′20), Curran Associates Inc., Red Hook, NY, USA, Article 1856, 22134–22145.
[4] X. Chen, H. Xu, Y. Zhang, J. Tang, Y. Cao, Z. Qin, and H. Zha, 2018, “Sequential Recommendation with User Memory Networks,” In Proceedings of WSDM, ACM, 108–116.
[5] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, 2014, “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation,” In Proceedings of EMNLP, 1724–1734.
[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova, 2019, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” In Proceedings of NAACL.
[7] T. Donkers, B. Loepp, and J. Ziegler, 2017, “Sequential User-based Recurrent Neural Network Recommendations,” In Proceedings of RecSys, 152–160.
[8] F. M. Harper and J. A. Konstan, 2015, “The MovieLens Datasets: History and Context,” ACM Trans. Interact. Intell. Syst. 5, 4, Article 19 (Dec. 2015), 19 pages.
[9] K. He, X. Zhang, S. Ren, and J. Sun, 2016, “Deep Residual Learning for Image Recognition,” In Proceedings of CVPR, IEEE, 770–778.
[10] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T. Chua, 2017, “Neural Collaborative Filtering,” In Proceedings of WWW, ACM, 173–182.
[11] D. Hendrycks and K. Gimpel, 2016, “Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units,” CoRR abs/1606.08415 (2016).
[12] B. Hidasi and A. Karatzoglou, 2018, “Recurrent Neural Networks with Top-k Gains for Session-based Recommendations,” In Proceedings of CIKM, ACM, 843–852.
[13] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, 2016, “Session-based Recommendations with Recurrent Neural Networks,” In Proceedings of ICLR.
[14] G. Hinton, O. Vinyals, and J. Dean, 2015, “Distilling the knowledge in a neural network,” In Deep Learning and Representation Learning Workshop.
[15] S. Hochreiter and J. Schmidhuber, 1997, “Long Short-Term Memory,” Neural Computation 9, 8 (Nov. 1997), 1735–1780.
[16] J. Huang, W. X. Zhao, H. Dou, J. Wen, and E. Y. Chang, 2018, “Improving Sequential Recommendation with Knowledge-Enhanced Memory Networks,” In Proceedings of SIGIR, ACM, 505–514.
[17] Y. Ji, A. Sun, J. Zhang, and C. Li, 2020, “A Re-visit of the Popularity Baseline in Recommender Systems,” In Proceedings of SIGIR, ACM, 1749–1752.
[18] S. Kabbur, X. Ning, and G. Karypis, 2013, “FISM: Factored Item Similarity Models for top-N Recommender Systems,” In Proceedings of KDD, ACM, 659–667.
[19] W. Kang and J. McAuley, 2018, “Self-Attentive Sequential Recommendation,” In Proceedings of ICDM, 197–206.
[20] D. P. Kingma and J. Ba, 2015, “Adam: A Method for Stochastic Optimization,” In Proceedings of ICLR.
[21] Y. Koren, 2008, “Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model,” In Proceedings of KDD, ACM, 426–434.
[22] Y. Koren and R. Bell, 2011, “Advances in Collaborative Filtering,” Recommender Systems Handbook, Springer US, Boston, MA, 145–186.
[23] Y. Koren, R. Bell, and C. Volinsky, 2009, “Matrix Factorization Techniques for Recommender Systems,” Computer 42, 8 (Aug. 2009), 30–37.
[24] J. Li, Z. Tu, B. Yang, M. R. Lyu, and T. Zhang, 2018, “Multi-Head Attention with Disagreement Regularization,” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Brussels, Belgium, 2897–2903.
[25] J. Li, P. Ren, Z. Chen, Z. Ren, T. Lian, and J. Ma, 2017, “Neural Attentive Session-based Recommendation,” In Proceedings of CIKM, ACM, 1419–1428.
[26] J. Lian, X. Zhou, F. Zhang, Z. Chen, X. Xie, and G. Sun, 2018, “xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems,” In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ′18), Association for Computing Machinery, New York, NY, USA, 1754–1763.
[27] G. Linden, B. Smith, and J. York, 2003, “Amazon.Com Recommendations: Item-to-Item Collaborative Filtering,” IEEE Internet Computing 7, 1 (Jan. 2003), 76–80.
[28] J. Ni, J. Li, and J. McAuley, 2019, “Justifying recommendations using distantly-labeled reviews and fine-grained aspects,” In Proceedings of Empirical Methods in Natural Language Processing (EMNLP).
[29] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, 2018, “Improving language understanding by generative pre-training,” In OpenAI Technical report.
[30] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, 2009, “BPR: Bayesian Personalized Ranking from Implicit Feedback,” In Proceedings of UAI, AUAI Press, Arlington, Virginia, United States, 452–461.
[31] S. Rendle, C. Freudenthaler, and L. Schmidt-Thieme, 2010, “Factorizing Personalized Markov Chains for Next-basket Recommendation,” In Proceedings of WWW, ACM, 811–820.
[32] R. Salakhutdinov and A. Mnih, 2007, “Probabilistic Matrix Factorization,” In Proceedings of NIPS, Curran Associates Inc., USA, 1257–1264.
[33] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, 2001, “Item-based Collaborative Filtering Recommendation Algorithms,” In Proceedings of WWW, ACM, 285–295.
[34] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie, 2015, “AutoRec: Autoencoders Meet Collaborative Filtering,” In Proceedings of WWW, ACM, 111–112.
[35] G. Shani, D. Heckerman, and R. I. Brafman, 2005, “An MDP-Based Recommender System,” J. Mach. Learn. Res. 6 (Dec. 2005), 1265–1295.
[36] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, 2014, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” J. Mach. Learn. Res. 15, 1 (Jan. 2014), 1929–1958.
[37] F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang, 2019, “BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer,” In Proceedings of CIKM, ACM, 1441–1450.
[38] G. Tang, M. Müller, A. Rios, and R. Sennrich, 2018, “Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures,” In Proceedings of EMNLP. 4263–4272.
[39] J. Tang and K. Wang, 2018, “Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding,” In Proceedings of WSDM, 565–573.
[40] W. L. Taylor, 1953, “‘Cloze Procedure’: A New Tool for Measuring Readability,” Journalism Bulletin 30, 4 (1953), 415–433.
[41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, 2017, “Attention is All you Need,” In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS′17), Curran Associates Inc., Red Hook, NY, USA, 6000–6010.
[42] Y. Wu, C. DuBois, A. X. Zheng, and M. Ester, 2016, “Collaborative Denoising Auto-Encoders for Top-N Recommender Systems,” In Proceedings of WSDM, ACM, 153–162.
[43] M. D. Zeiler and R. Fergus, 2014, “Visualizing and understanding convolutional networks,” In Proceedings of ECCV.
Advisor: 陳以錚 (Yi-Jeng Chen)    Approval Date: 2023-02-03
