Graduate Thesis 108522005: Detailed Record




Name: Ming-Shiun Jiang (江明勳)    Department: Computer Science and Information Engineering
Thesis title: A Real-time Embedding Increasing for Session-based Recommendation with Graph Neural Networks
Related theses
★ 基於主診斷的訓練目標修改用於出院病摘之十代國際疾病分類任務★ 混合式心臟疾病危險因子與其病程辨識於電子病歷之研究
★ 基於 PowerDesigner 規範需求分析產出之快速導入方法★ 社群論壇之問題檢索
★ 非監督式歷史文本事件類型識別──以《明實錄》中之衛所事件為例★ 應用自然語言處理技術分析文學小說角色之關係:以互動視覺化呈現
★ 基於生醫文本擷取功能性層級之生物學表徵語言敘述:由主成分分析發想之K近鄰算法★ 基於分類系統建立文章表示向量應用於跨語言線上百科連結
★ Code-Mixing Language Model for Sentiment Analysis in Code-Mixing Data★ 藉由加入多重語音辨識結果來改善對話狀態追蹤
★ 對話系統應用於中文線上客服助理:以電信領域為例★ 應用遞歸神經網路於適當的時機回答問題
★ 使用多任務學習改善使用者意圖分類★ 使用轉移學習來改進針對命名實體音譯的樞軸語言方法
★ 基於歷史資訊向量與主題專精程度向量應用於尋找社群問答網站中專家★ 使用YMCL模型改善使用者意圖分類成效
File: full text not open for viewing (permanently restricted)
Abstract (Chinese) As machine learning research continues to advance, achieving good performance without large amounts of complex data is sometimes more important than requiring a model to perform well on massive data. In the field of recommender systems, mining users' interests from limited data is one of the popular research directions.

Session-based recommendation with graph neural networks is a very popular recommendation model: it can make good recommendations from nothing more than simple user browsing records. However, such models usually share an obvious drawback: they cannot perform any operation on an unknown item that was not seen during the training phase, even when it is not a cold-start item. This is a serious problem in practical applications, because repeatedly retraining a large model is impractical and consumes substantial resources.

To address this problem, this thesis proposes a novel controllable addition method that adds useful representations while affecting the original performance as little as possible. Extensive experiments on many real-world datasets demonstrate the effectiveness and flexibility of our method, which also has the opportunity and potential to be used in other models or other tasks.
Abstract (English) As research on machine learning continues to progress, achieving good performance without a large amount of complicated data is prioritized over asking the model to reach good performance from huge data. In the field of recommendation systems, digging out users' interests with limited data is one of the popular research directions.

Session-based Recommendation with Graph Neural Networks is a very popular model: it can make good recommendations with only simple user browsing records. However, this kind of model usually has an obvious disadvantage: it cannot perform any action on an unknown item that the model has not seen during the training phase, even when it is not a cold-start item. This is a big problem in practical applications, because machines are unlikely to train a large model repeatedly, as doing so consumes a lot of resources.

To solve this problem, a novel controllable addition method is proposed: useful representations can be added while affecting the original performance as little as possible. Extensive experiments conducted on many real-world datasets show the effectiveness and flexibility of our method, and it also has the opportunity and potential to be used in other models or other tasks.
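To make the idea of attaching a representation for an unseen item without retraining more concrete, below is a minimal sketch of one plausible scheme: fit an embedding for the new item from already-computed session states with a Moore-Penrose pseudo-inverse (least squares), blend it with a mean-aggregated estimate from co-occurring known items, and append the result to the trained embedding table. The variable names (item_emb, session_states, alpha) and the exact formulation are assumptions made for illustration; they echo the terms "Pseudo Inverse Approximation" and "hyperparameter α" from the table of contents but are not the thesis's actual method.

# Illustrative sketch only (Python/NumPy); names and formulation are assumed,
# not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)
n_known, dim = 1000, 64

# Embedding table learned during training (one row per known item).
item_emb = rng.normal(size=(n_known, dim))

# Hidden states of new sessions in which the unseen item was the clicked target.
session_states = rng.normal(size=(20, dim))

# Least-squares fit via the Moore-Penrose pseudo-inverse: find a vector v such
# that every session that ended in the unseen item gives it a positive score,
# i.e. session_states @ v is approximately a vector of ones.
targets = np.ones(session_states.shape[0])
v_ls = np.linalg.pinv(session_states) @ targets

# Neighbour-based estimate: mean of known items that co-occur with the unseen
# item in the new sessions (GraphSAGE-style mean aggregation).
co_occurring = rng.choice(n_known, size=15, replace=False)
v_nbr = item_emb[co_occurring].mean(axis=0)

# Blend the two estimates with a weight alpha and append the new row, so the
# model can score the unseen item immediately, without any retraining.
alpha = 0.5
v_new = alpha * v_ls + (1.0 - alpha) * v_nbr
item_emb = np.vstack([item_emb, v_new[None, :]])
print(item_emb.shape)  # (1001, 64): the unseen item now has an embedding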
Keywords (Chinese) ★ 機器學習 (machine learning)
★ 推薦系統 (recommendation system)
★ 圖神經網路 (graph neural networks)
Keywords (English) ★ machine learning
★ recommendation system
★ Graph Neural Networks
★ session
★ unknown item
Table of contents
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
List of Figures iv
List of Tables vi
1. Introduction 1
1.1 Background 1
1.1.1 Content-based Recommendation System 2
1.1.2 Collaborative Filtering Recommendation System 3
1.1.3 Hybrid Recommendation System 4
1.2 Motivation 5

2. Related Work 7
2.1 GraphSAGE 8
2.2 PinSAGE 11
2.2.1 Convolve 11
2.2.2 Minibatch 12
2.3 Our Approximate Method 13

3. Method 15
3.1 Baseline 15
3.2 The Proposed Method 18
3.2.1 Pseudo Inverse Approximation 18
3.2.2 Sample Softmax Loss 21
4. Experiment and Analysis 23
4.1 Dataset 23
4.2 Evaluation Metrics 24
4.3 Parameter Setup 25
4.4 Comparison with Different Datasets 25
4.5 Comparison with Different Methods in Multi-Domain Perspective 27
4.6 Analysis of Hyperparameter α 28
4.7 Comparison with Different Number of Calculations 29

5. Discussion 31
5.1 Candidate Item Pool 31
5.2 The impact of Mean Reciprocal Rank 31
5.3 Training Time and Updating Time 32

6. Conclusion 34

7. Improvements and Extensions 35

Index 36

References 37
References
[1] G. Adomavicius, A. Tuzhilin.
Towards the Next Generation of Recommender Systems.
In IEEE, 2005.

[2] Pasquale Lops, Marco de Gemmis, Giovanni Semeraro.
Content-based Recommender Systems: State of the Art and Trends.
In Recommender Systems Handbook, pp. 73-105, 2011.

[3] David Goldberg, David Nichols, Brian M. Oki, Douglas Terry.
Using collaborative filtering to weave an information tapestry.
In ACM, 1992.

[4] J. Ben Schafer, Dan Frankowski, Jon Herlocker, Shilad Sen.
Collaborative Filtering Recommender Systems.
In LNCS, 2007.

[5] Kai Yu, Anton Schwaighofer, Volker Tresp, Xiaowei Xu, Hans-Peter Kriegel.
Probabilistic Memory-based Collaborative Filtering.
In IEEE, 2004.

[6] Shuai Zhang, Lina Yao, Aixin Sun, Yi Tay.
Deep Learning based Recommender System.
In ACM, 2017.

[7] Badrul Sarwar, George Karypis, Joseph Konstan, John Riedl.
Item-Based Collaborative Filtering Recommendation Algorithms.
In WWW, 2001.

[8] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, Angela Y. Wu.
An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions.
In ACM, 1998.

[9] Rodgers, J. L.; Nicewander, W. A.
Thirteen ways to look at the correlation coefficient.
In The American Statistician, 1988.

[10] P.N. Tan, M. Steinbach, V. Kumar.
Introduction to Data Mining.
Addison-Wesley, 2005.

[11] Robin Burke.
Hybrid Recommender Systems: Survey and Experiments.
In UMUAI, 2002.

[12] Simen Eide, Ning Zhou.
Deep neural network marketplace recommenders in online experiments.
In RecSys, 2018.

[13] G. Linden, B. Smith, J. York.
Amazon.com recommendations: item-to-item collaborative filtering.
In IEEE, 2003.

[14] Oren Barkan, Noam Koenigstein.
Item2Vec Neural Item Embedding for Collaborative Filtering.
In RecSys, 2016.

[15] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean.
Efficient Estimation of Word Representations in Vector Space.
In ICLR, 2013.

[16] Gerard Salton, Michael J. McGill.
Introduction to modern information retrieval.
In SIGIR, 1983.

[17] Lawrence Page, Sergey Brin, Rajeev Motwani, Terry Winograd.
The PageRank Citation Ranking: Bringing Order to the Web.
In WWW, 1999.

[18] Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan.
Session-based Recommendation with Graph Neural Networks.
In AAAI, 2019.

[19] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu.
A Comprehensive Survey on Graph Neural Networks.
In IEEE, 2018.

[20] Vapnik, Vladimir.
Estimation of Dependences Based on Empirical Data.
Springer, 2006.

[21] W. L. Hamilton, R. Ying, and J. Leskovec.
Inductive Representation Learning on Large Graphs.
In NIPS, 2017.

[22] Sepp Hochreiter, Jürgen Schmidhuber.
Long Short-Term Memory.
In Neural Computation, 1997.

[23] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai,William L. Hamilton, Jure Leskovec.
Graph Convolutional Neural Networks for Web-Scale Recommender Systems.
In KDD, 2018.

[24] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger.
Densely Connected Convolutional Networks.
In IEEE, 2017.

[25] Thomas N. Kipf, Max Welling.
Semi-Supervised Classification with Graph Convolutional Networks.
In ICLR, 2016.

[26] Yujia Li, Daniel Tarlow, Marc Brockschmidt, Richard Zemel.
Gated Graph Sequence Neural Networks.
In ICLR, 2016.

[27] Rubinstein, R.Y, Kroese, D.P.
The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning.
Springer, 2004.

[28] Hüseyin Tevfik Pasha.
Linear Algebra.
1882.

[29] Yoshihiko Nakamura.
Advanced Robotics: Redundancy and Optimization.
Addison-Wesley, 1991.

[30] Rubinstein, R.Y., Kroese, D.P.
The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning.
Springer, 2004.
Advisor: Tzong-Han Tsai (蔡宗翰)    Approval date: 2022-06-29
