博碩士論文 104582604 詳細資訊




姓名 佳樂恩(Chalothon Chootong)   畢業系所 資訊工程學系
論文名稱 基於職業技能和教育視訊之學習內容生成與總結方法
(Learning Content Generation and Summarization based on Industrial Skills and Educational Videos)
相關論文
★ 基於注意力之用於物件定位的語義分割方法
★ 基於圖卷積網路的自動門檢測
★ 以多模態時空域建模的深度學習方法分類影像中的動態模式
  1. 本電子論文使用權限為同意立即開放。
  2. 已達開放權限電子全文僅授權使用者為學術研究之目的,進行個人非營利性質之檢索、閱讀、列印。
  3. 請遵守中華民國著作權法之相關規定,切勿任意重製、散佈、改作、轉貼、播送,以免觸法。

摘要(中) 資訊與通信科技(ICT)產業所需的知識技能不斷增加。由於資訊產業的工作內容十分廣泛,要教導學生具備可應用於各類工作的全部技能並不可行,因此學生在大學所學的課程內容與產業需求之間存在落差。為此,我們提出學習內容推薦系統(LCRec),讓學生能依據工作所需的技能媒合適合的課程內容。本研究的第一個目標是提供整合IT相關工作技能、維基百科與2013年計算機科學課程知識單元(CS2013)的技能手冊。我們透過公開的求職網站調查產業需求;調查結果也能讓大學有效地瞭解產業所需的技能,並進一步強化相關課程的學習內容。我們邀請業界專家、學者及學生對LCRec的實用性進行實驗,並分析其回饋結果。研究結果顯示,本系統能有效縮小大學與產業間的學用落差(即找出欠缺的學習內容)。

現今學習內容的來源不僅限於書籍,還包含視頻、部落格、網頁等;值得注意的是,教育視頻已成為人們獲取新知的重要媒體。然而,許多視頻缺少內容描述,使用者需花費時間才能找到合適的影片並掌握其核心內容。因此,本研究的第二個目標是提出自動字幕摘要機制:基於多重注意力機制,結合卷積神經網路(CNN)與雙向長短期記憶(Bi-LSTM)網路的混合模型,擷取句子的關鍵資訊。字幕文件中的每個句子都會被賦予一個顯著性分數,模型依據句子特徵及其對應分數進行訓練,進而產生視頻摘要。此外,實驗階段亦使用DUC2002與CNN/Daily Mail兩個文本資料集進行訓練,以測試模型的效能。我們以ROUGE指標評估生成的摘要;在95%信賴區間下的實驗結果顯示,我們的模型在ROUGE-1、ROUGE-2與ROUGE-L分數上均優於基準模型與其他先進模型。
摘要(英) New knowledge skills constantly emerge in the Information and Communication Technology (ICT) industry. With the wide variety of jobs available, it is impractical to equip students with every skill required to match every job. This issue points to a gap between what is taught in universities and what the industry needs. Therefore, we propose the Learning Content Recommender (LCRec), which helps students find appropriate learning content based on the skills a job requires. The first purpose of this research is to provide skill books that combine IT job skills, Wikipedia, and Knowledge Units from the Computer Science Curriculum 2013 (CS2013). Skills collected from publicly available job-search websites are used to investigate what the industry needs; these data also allow academics to examine the skills demanded by industry and to enhance the curriculum accordingly. We carried out experiments and analyzed feedback from professionals, academics, and students to test the usefulness of LCRec. The results demonstrate that the system can help bridge the gap (i.e., identify what learning content is lacking) between academia and industry.
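To make the skill-to-curriculum matching concrete, the following is a minimal illustrative sketch in Python. It is not the LCRec implementation: the knowledge-unit excerpt, the similarity threshold, and the function name are hypothetical, and a simple string-similarity match stands in for whatever matching LCRec actually performs.

from difflib import SequenceMatcher

# Hypothetical excerpt of CS2013 Knowledge Units and their topic keywords.
CS2013_KNOWLEDGE_UNITS = {
    "Fundamental Data Structures": ["array", "linked list", "hash table", "tree"],
    "Machine Learning": ["supervised learning", "neural network", "classification"],
    "Web Platforms": ["html", "javascript", "rest api"],
}

def match_skills_to_units(job_skills, units=CS2013_KNOWLEDGE_UNITS, threshold=0.8):
    """Map each skill scraped from a job posting to its best-matching Knowledge Unit."""
    skill_book = {}
    for skill in job_skills:
        best_unit, best_score = None, 0.0
        for unit, topics in units.items():
            score = max(SequenceMatcher(None, skill.lower(), t).ratio() for t in topics)
            if score > best_score:
                best_unit, best_score = unit, score
        if best_score >= threshold:
            skill_book.setdefault(best_unit, []).append(skill)
    return skill_book

# Example: skills taken from a (hypothetical) job posting.
print(match_skills_to_units(["Neural Networks", "REST APIs", "Hash Tables"]))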

Learning content today comes not only from books but also from videos, blogs, web pages, and other sources. Notably, educational videos have become an essential medium for acquiring new knowledge. However, many videos lack descriptions, so users may spend considerable time finding a suitable video and grasping its core content. This motivates the study of automatic subtitle summarization. As the second goal, we introduce a novel multiple attention mechanism for subtitle summarization. The strengths of Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory (Bi-LSTM) networks are combined to capture the critical information in each sentence. Each sentence in the subtitle document is assigned a salience score, and video summaries are then produced from the sentence features and their scores. Experiments are conducted on subtitle documents from educational videos as well as on text documents; in addition, we evaluate the model on two well-known text datasets, DUC2002 and CNN/Daily Mail. We use ROUGE measures, reported with 95% confidence intervals, to evaluate the generated summaries. The experimental results show that our model outperforms the baseline and state-of-the-art models on the ROUGE-1, ROUGE-2, and ROUGE-L scores.
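The sketch below (PyTorch) illustrates the general shape of such a scorer: convolutional filters extract n-gram features per sentence, a Bi-LSTM reads the sequence of sentence vectors for document context, and an attention layer weights sentences before a per-sentence salience score is predicted; the top-scoring sentences then form the extractive summary. This is a simplified sketch under assumed dimensions and layer choices, not the thesis' exact WACNNs/AttBi-LSTM model.

import torch
import torch.nn as nn

class SalienceScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, conv_channels=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Parallel convolutions capture n-gram features within each sentence (CNN part).
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, conv_channels, k, padding=k // 2) for k in (2, 3, 4)]
        )
        # Bi-LSTM over the sequence of sentence vectors captures document-level context.
        self.bilstm = nn.LSTM(3 * conv_channels, hidden, bidirectional=True, batch_first=True)
        # Attention over sentence positions, then a per-sentence salience score in [0, 1].
        self.att = nn.Linear(2 * hidden, 1)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, docs):                   # docs: (batch, n_sents, n_words) word ids
        b, s, w = docs.shape
        x = self.embed(docs.view(b * s, w)).transpose(1, 2)        # (b*s, emb, words)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        sent_vecs = torch.cat(feats, dim=1).view(b, s, -1)         # one vector per sentence
        ctx, _ = self.bilstm(sent_vecs)                            # (b, s, 2*hidden)
        alpha = torch.softmax(self.att(ctx), dim=1)                # attention over sentences
        return torch.sigmoid(self.score(ctx * alpha)).squeeze(-1)  # salience per sentence

def extract_summary(sentences, scores, k=3):
    """Pick the k highest-scoring sentences and restore their original order."""
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]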
關鍵字(中) ★ 學習內容推薦
★ 知識管理
★ 自動生成內容
★ 計算機科學課程2013
★ 教育視頻摘要
★ 提取總結
★ 字幕總結
★ 集成CNN-LSTM
★ 多注意力機制
★ 教育視頻
關鍵字(英) ★ Learning Content Recommendation
★ Knowledge Management
★ Auto Generated Contents
★ Computer Science Curriculum 2013
★ Lecture Video Summarization
★ Extractive Summarization
★ Subtitle Summarization
★ Integration CNN-LSTM
★ Multiple Attention Mechanism
★ Educational Video
論文目次 Content
Abstract i
摘要 iii
Acknowledgement v
Table of Contents vi
List of Figures ix
List of Tables xi
Chapter 1. Introduction 1
1.1 Background 1
1.2 Motivation 2
1.2.1 Educational Gap 2
1.2.2 Lack of Materials Description 3
1.3 Dissertation Organization 4
Chapter 2. Literature Review 5
2.1 Educational Recommender Systems 5
2.1.1 Learning Content Recommendation 5
2.1.2 Book Recommendation 6
2.2 The Computer Science Curriculum (CS2013) 8
2.3 The Gap between Academia and ICT Industry 9
2.4 Contents Summarization 10
2.4.1 Text Summarization 11
2.4.2 Speech Summarization 17
2.4.3 Subtitle and Scripts Summarization 17
2.5 Neural Word Embeddings 18
2.6 Deep Learning to Summarize Documents 22
2.7 Attention Mechanism for Natural Language Processing (NLP) 22
Chapter 3. Learning Content Recommendation System 25
3.1 System Overview of Wiki-based Skill Book 25
3.2 Data Collection 27
3.2.1 Job Data 27
3.2.2 Skill Data 29
3.3 System Functionalities 30
3.3.1 Job Position Recommendation 30
3.3.2 Job-Skill and Knowledge Units Matching 31
3.3.3 Creating a List of Content Topics and Skill Book Generation 34
3.4 Wikipedia 40
3.4.1 Creating the Skill Dictionary 40
3.4.2 Content Table Creation 41
3.4.3 Learning Contents Generation 42
Chapter 4. Educational Video Summarization 44
4.1 Pre-trained Process 45
4.2 The Sentence Salience Score Prediction Module 46
4.2.1 Sentence Feature Extraction WACNNs Based 46
4.2.2 Sentence Context Capturing (AttBi-LSTM) 48
4.2.3 Sentence Score Prediction 49
4.3 Summary Generation 50
4.4 Data Sets 51
4.4.1 TED.COM 51
4.4.2 YouTube Channel 52
4.4.3 DUC2002 and CNN/Daily Mail 54
Chapter 5. Experimental Results of LCRec System 55
5.1 System Evaluation 57
5.2 LCRec Content Diversity 60
5.3 LCRec’s Discussion 64
Chapter 6. Experimental Results of Educational Video Summarization 66
6.1 Experimental Metrics 66
6.1.1 Recall-Oriented Understudy for Gisting Evaluation (ROUGE) 66
6.1.2 Salience Score 67
6.2 Influence of the Attention Mechanism 68
6.3 Overall Experiments 69
6.3.1 Experiment on Subtitle Document 69
6.3.2 Experiment on Text Documents 71
6.4 Limitations of The Proposed Model 73
Chapter 7. Discussion and Conclusions 75
7.1 LCRec System 75
7.2 Educational Video Summarization 76
7.3 Future Works 77
References 78
Appendices 88
參考文獻 [1] C. S. Nair, A. Patil, and P. Mertova, “Re-engineering graduate skills – a case study,” Eur. J. Eng. Educ., vol. 34, no. 2, pp. 131–139, May 2009.
[2] P. K. Tulsi and M. P. Poonia, “Expectations of Industry from Technical Graduates: Implications for Curriculum and Instructional Processes,” J. Eng. Educ. Transform., vol. 28, no. 4, pp. 19–24, 2015.
[3] Y. J. Kumar, O. S. Goh, H. Basiron, N. H. Choon, and P. C. Suppiah, “A review on automatic text summarization approaches,” J. Comput. Sci., vol. 12, no. 4, pp. 178–190, 2016.
[4] M. Yousefi-azar and L. Hamey, “Text summarization using unsupervised deep learning,” Expert Syst. Appl., vol. 68, no. October, pp. 93–105, 2017.
[5] H. Rashidghalam, M. Taherkhani, and F. Mahmoudi, “Text summarization using concept graph and BabelNet knowledge base,” 2016 Artif. Intell. Robot. IRANOPEN 2016, pp. 115–119, 2016.
[6] G. PadmaPriya and K. Duraiswamy, “An approach for text summarization using deep learning algorithm,” J. Comput. Sci., vol. 10, no. 1, pp. 1–9, 2014.
[7] S. Yan and X. Wan, “Deep Dependency Substructure-Based Learning for Multidocument Summarization,” ACM Trans. Inf. Syst., vol. 34, no. 1, pp. 1–24, Jul. 2015.
[8] J. G. Boticario, “Modeling recommendations for the educational domain,” Procedia Comput. Sci., vol. 1, no. 2, pp. 2793–2800, Jan. 2010.
[9] P. Lops, M. de Gemmis, and G. Semeraro, “Content-based Recommender Systems: State of the Art and Trends,” in Recommender Systems Handbook, Boston, MA: Springer US, 2011, pp. 73–105.
[10] T. Y. Tang and G. Mccalla, “Smart Recommendation for an Evolving E-Learning System,” in In Workshop on Technologies for Electronic Documents for Supporting Learning, International Conference on Artificial Intelligence in Education (AIED 2003). Sydney: International Conference on AI In Education, 2003.
[11] F. Mödritscher, “Towards a recommender strategy for personal learning environments,” Procedia Comput. Sci., vol. 1, no. 2, pp. 2775–2782, Jan. 2010.
[12] T. Chen, W.-L. Han, H.-D. Wang, Y.-X. Zhou, B. Xu, and B.-Y. Zang, “Content Recommendation System Based on Private Dynamic User Profile,” in 2007 International Conference on Machine Learning and Cybernetics, 2007, pp. 2112–2118.
[13] A. Klašnja-Milićević, B. Vesin, M. Ivanović, and Z. Budimac, “E-Learning personalization based on hybrid recommendation strategy and learning style identification,” Comput. Educ., vol. 56, no. 3, pp. 885–899, Apr. 2011.
[14] S. Kanetkar, A. Nayak, S. Swamy, and G. Bhatia, “Web-based personalized hybrid book recommendation system,” in 2014 International Conference on Advances in Engineering & Technology Research (ICAETR - 2014), 2014, pp. 1–5.
[15] U. Rey Juan Carlos and U. Móstoles, “Prototype of content-based recommender system in an educational social network,” in In 1st Workshop on Video based Learning, 2011.
[16] O. De Clercq, M. Schuhmacher, S. P. Ponzetto, and V. Hoste, “Exploiting FrameNet for Content-Based Book Recommendation,” in Proc. of the 1st Workshop on New Trends in Content-based Recommender Systems co-located with the 8th ACM Conference on Recommender Systems : CBRecSys at ACM RecSys, 2014, pp. 14–21.
[17] C. Chen, L. Zhang, H. Qiao, S. Wang, Y. Liu, and X. Qiu, “Book Recommendation Based on Book-Loan Logs,” Springer, Berlin, Heidelberg, 2012, pp. 269–278.
[18] P. Mathew, B. Kuriakose, and V. Hegde, “Book Recommendation System through content based and collaborative filtering method,” in 2016 International Conference on Data Mining and Advanced Computing (SAPIENCE), 2016, pp. 47–52.
[19] A. S. Tewari, A. Kumar, and A. G. Barman, “Book recommendation system based on combine features of content based filtering, collaborative filtering and association rule mining,” in 2014 IEEE International Advance Computing Conference (IACC), 2014, pp. 500–503.
[20] P. Jomsri, “Book recommendation system for digital library based on user profiles by using association rule,” in Fourth edition of the International Conference on the Innovative Computing Technology (INTECH 2014), 2014, pp. 130–134.
[21] A. S. Tewari, T. S. Ansari, and A. G. Barman, “Opinion based book recommendation using Naive Bayes classifier,” in 2014 International Conference on Contemporary Computing and Informatics (IC3I), 2014, pp. 139–144.
[22] L. Xin, J. Song, M. Song, and J. Tong, “Book Recommendation Based on Community Detection,” in Pervasive Computing and the Networked World. ICPCA/SWS 2013. Lecture Notes in Computer Science, 2014, vol. 8351, pp. 364–373.
[23] N. Pukkhem and W. Vatanawood, “Personalised learning object based on multi-agent model and learners’ learning styles,” Maejo Int. J. Sci. Technol., vol. 5, no. September 2011, pp. 292–311, 2011.
[24] L. Shen and R. Shen, “Learning Content Recommendation Service Based-on Simple Sequencing Specification,” Springer, Berlin, Heidelberg, 2004, pp. 363–370.
[25] H. Imran, M. Belghis-Zadeh, T.-W. Chang, Kinshuk, and S. Graf, “PLORS: a personalized learning object recommender system,” Vietnam J. Comput. Sci., vol. 3, no. 1, pp. 3–13, Feb. 2016.
[26] R. Muthyala, S. Wood, Y. Jin, Y. Qin, H. Gao, and A. Rai, “Data-driven Job Search Engine Using Skills and Company Attribute Filters,” in 2017 IEEE International Conference on Data Mining, 2017.
[27] The Joint Task Force on Computing Curricula, Association for Computing Machinery (ACM) and IEEE Computer Society, “Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science,” 2013.
[28] S. Roach, M. Sahami, R. LeBlanc, and R. Seker, “Special session: The CS2013 Computer Science curriculum guidelines project,” in 2013 IEEE Frontiers in Education Conference (FIE), 2013, pp. 1311–1313.
[29] R. Martin, B. Maytham, J. Case, and D. Fraser, “Engineering graduates’ perceptions of how well they were prepared for work in industry,” Eur. J. Eng. Educ., vol. 30, no. 2, pp. 167–180, May 2005.
[30] A. Radermacher and G. Walia, “Gaps between industry expectations and the abilities of graduates,” in Proceeding of the 44th ACM technical symposium on Computer science education - SIGCSE ’13, 2013, p. 525.
[31] A. Fiori, Innovative document summarization techniques : revolutionizing knowledge understanding. Hershey, PA, USA: IGI Global, 2014.
[32] C.-Y. Lin and E. Hovy, “The automated acquisition of topic signatures for text summarization,” in The 18th International Conference on Computational Linguistics, 2000.
[33] R. Mihalcea, “Graph-based Ranking Algorithms for Sentence Extraction, Applied to Text Summarization.”, Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, July, DOI: https://doi.org/10.3115/1219044.1219064, 2004
[34] A. Dode and S. Hasani, “PageRank Algorithm,” IOSR J. Comput. Eng., vol. 19, no. 01, pp. 01–07, Feb. 2017.
[35] A. Abuobieda, N. Salim, Y. J. Kumar, and A. H. Osman, “Opposition differential evolution based method for text summarization,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2013, vol. 7802 LNAI, no. PART 1, pp. 487–496.
[36] J. Kupiec, J. Pedersen, and F. Chen, “A trainable document summarizer,” in Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR ’95, 1995, pp. 68–73.
[37] H. Lin and J. Bilmes, “Multi-document Summarization via Budgeted Maximization of Submodular Functions.”, Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, California, June, 2010
[38] H. Kobayashi, M. Noguchi, and T. Yatsuka, "Summarization based on embedding distributions". In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal: Association for Computational Linguistics, pp. 1984–1989, doi:10.18653/v1/D15-1232, 2015
[39] J. P. A. Vieira and R. S. Moura, “An analysis of convolutional neural networks for sentence classification,” 2017 43rd Lat. Am. Comput. Conf. CLEI 2017, vol. 2017-Janua, pp. 1–5, 2017.
[40] J. Liu, W.-C. Chang, Y. Wu, and Y. Yang, “Deep Learning for Extreme Multi-label Text Classification,” Proc. 40th Int. ACM SIGIR Conf. Res. Dev. Inf. Retr. - SIGIR ’17, pp. 115–124, 2017.
[41] H. Xu, M. Dong, D. Zhu, A. Kotov, A. I. Carcone, and S. Naar-King, “Text Classification with Topic-based Word Embedding and Convolutional Neural Networks,” in Proceedings of the 7th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics - BCB ’16, 2016, pp. 88–97.
[42] C. I. Tsai, H. T. Hung, K. Y. Chen, and B. Chen, “Extractive speech summarization leveraging convolutional neural network techniques,” 2016 IEEE Work. Spok. Lang. Technol. SLT 2016 - Proc., pp. 158–164, 2017.
[43] P. Ren, Z. Chen, Z. Ren, F. Wei, J. Ma, and M. de Rijke, “Leveraging Contextual Sentence Relations for Extractive Summarization Using a Neural Attention Model,” Proc. 40th Int. ACM SIGIR Conf. Res. Dev. Inf. Retr. - SIGIR ’17, pp. 95–104, 2017.
[44] Y. Zhang, M. J. Er, and M. Pratama, “Extractive document summarization based on convolutional neural networks,” IECON 2016 - 42nd Annu. Conf. IEEE Ind. Electron. Soc., pp. 918–922, 2016.
[45] R. Nallapati, F. Zhai, and B. Zhou, “SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents,” in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017, pp. 3075–3081.
[46] A. M. Rush, S. Chopra, and J. Weston, “A Neural Attention Model for Abstractive Sentence Summarization,” in Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 379–389.
[47] R. Nallapati, B. Zhou, C. N. dos Santos, C. Gulcehre, and B. Xiang, “Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond,” in Proc. of The 20th SIGNLL Conference on Computational Natural Language Learning, 2016, pp. 280–290.
[48] K.-Y. Chen et al., “Extractive Broadcast News Summarization Leveraging Recurrent Neural Network Language Modeling Techniques,” IEEE/ACM Trans. Audio, Speech, Lang. Process., pp. 1322–1334, 2015.
[49] M. Aparício, P. Figueiredo, F. Raposo, D. Martins De Matos, R. Ribeiro, and L. Marujo, “Summarization of films and documentaries based on subtitles and scripts,” Pattern Recognit. Lett., vol. 73, no. 1, 2016, pp. 7–12,
[50] J. L. Neto, A. A. Freitas Celso, and A. A. Kaestner, “Automatic Text Summarization using a Machine Learning Approach,” in Advances in Artificial Intelligence, 16th Brazilian Symposium on Artificial, 2002, pp. 205–215.
[51] D. Miller, “Leveraging BERT for Extractive Text Summarization on Lectures,” Comput. Lang., Jun. 2019.
[52] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Distributed Representations of Words and Phrases and their Compositionality.”, NIPS′13: Proceedings of the 26th International Conference on Neural Information Processing Systems, Vol. 2, December 2013, pp.3111–3119
[53] J. Pennington, R. Socher, and C. D. Manning, “GloVe: Global Vectors for Word Representation.”, In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532–1543). Doha, Qatar: Association for Computational Linguistics. https://doi.org/10.3115/v1/D14-1162.
[54] A. Galassi, M. Lippi, and P. Torroni, “Attention in Natural Language Processing.”, IEEE Transactions on Neural Networks and Learning Systems (2020), DOI: 10.1109/TNNLS.2020.3019893, September, pp.1-8, 2020
[55] D. Bahdanau, K. Cho, and Y. Bengio, “Neural Machine Translation by Jointly Learning to Align and Translate,” in The International Conference on Learning Representations (ICLR), 2015. arXiv:1409.0473.
[56] Y. Zhou, J. Xu, J. Cao, B. Xu, C. Li, and B. Xu, “Hybrid Attention Networks for Chinese Short Text Classification,” Comput. y Sist., vol. 21, no. 4, pp. 759–769, 2017.
[57] W. Yin, H. Schütze, B. Xiang, and B. Zhou, “ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs,” Transactions of the Association for Computational Linguistics, vol. 4, 2016. DOI: 10.1162/tacl_a_00244, arXiv:1512.05193.
[58] Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy, “Hierarchical Attention Networks for Document Classification.”, In HLT-NAACL 2016 (pp. 1480–1489). San Diego, California: Association for Computational Linguistics. 10.18653/v1/N16-1174.
[59] P. Zhou et al., “Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification.”, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, 2016
[60] K. Al-Sabahi, Z. Zuping, and M. Nadher, “A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS),” IEEE Access, 6, 24205–24212. https://doi.org/10.1109/ACCESS.2018.2829199, May 2018.
[61] F. Zhao, B. Quan, J. Yang, J. Chen, Y. Zhang, and X. Wang, “Document Summarization using Word and Part-of-speech based on Attention Mechanism,” Journal of Physics, 1168. https://doi.org/10.1088/1742-6596/1168/3/032008, pp. 32008, 2019.
[62] “Rapid automatic keyword extraction for information retrieval and analysis,” Sep. 2009.
[63] F. Morin and Y. Bengio, “Hierarchical Probabilistic Neural Network Language Model,” in Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS), 2005.
[64] L. Van-Duyet, V. M. Quan, and D. Q. An, “Skill2vec: Machine Learning Approach for Determining the Relevant Skills from Job Description,” Jul. 2017.
[65] J. Lee, G. Research, N. Kothari, and P. Natsev, “Content-based Related Video Recommendations.”, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[66] P. Covington, J. Adams, and E. Sargin, “Deep Neural Networks for YouTube Recommendations,” Proc. 10th ACM Conf. Recomm. Syst. - RecSys ’16, pp. 191–198, 2016.
[67] A. Jadhav and V. Rajan, “Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks.”, In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 142–151). Melbourne, Australia: Association for Computational Linguistics. 10.18653/v1/P18-1014.
[68] Y. Dong, Y. Shen, E. Crawford, H. van Hoof, and J. C. K. Cheung, “BanditSum: Extractive Summarization as a Contextual Bandit,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium: Association for Computational Linguistics, 2018, pp. 3739–3748. doi:10.18653/v1/D18-1409.
[69] P. Bhaskar and S. Bandyopadhyay, “A Query Focused Multi Document Automatic Summarization.”, In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation (pp. 545–554). Tohoku University, Sendai, Japan
[70] N. Pappas and A. Popescu-Belis, “Combining Content with User Preferences for TED Lecture Recommendation.”, In 2013 11th International Workshop on Content-Based Multimedia Indexing (CBMI) (pp. 47–52). Veszprem, Hungary. doi:10.1109/CBMI.2013.6576551
[71] N. Pappas and A. Popescu-Belis, “Sentiment Analysis of User Comments for One-Class Collaborative Filtering over TED Talks.”, In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval (pp. 773–776). Dublin, Ireland. doi:10.1145/2484028.2484116.
[72] K. M. Hermann et al., “Teaching Machines to Read and Comprehend,” in Proceedings of the 28th International Conference on Neural Information Processing Systems, vol. 1, MIT Press, Cambridge, MA, USA, 2015, pp. 1693–1701. URL: https://arxiv.org/abs/1506.03340.
[73] J. R. Lewis, “IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use,” Int. J. Hum. Comput. Interact., vol. 7, no. 1, pp. 57–78, Jan. 1995.
[74] C. R. Rao, “Diversity and dissimilarity coefficients: A unified approach,” Theor. Popul. Biol., vol. 21, no. 1, pp. 24–43, Feb. 1982.
[75] K. Bache, D. Newman, and P. Smyth, “Text-based measures of document diversity,” in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’13, 2013, p. 23.
[76] C.-Y. Lin, “ROUGE: A Package for Automatic Evaluation of Summaries.”, In Text Summarization Branches Out (pp. 74–81). Barcelona, Spain: Association for Computational Linguistics. URL: https://www.aclweb.org/anthology/W04-1013.
[77] Q. Zhou, N. Yang, F. Wei, S. Huang, M. Zhou, and T. Zhao, "Neural document summarization by jointly learning to score and select sentences", In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 654–663), Melbourne, Australia: Association for Computational Linguistics, doi:10.18653/v1/P18-1061, 2018
[78] S. Narayan, S. B. Cohen, and M. Lapata, “Ranking Sentences for Extractive Summarization with Reinforcement Learning,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018, pp. 1747–1759.
[79] J. Shin, Y.Kim, S. Yoon, and K. Jung, “Contextual-CNN: A novel architecture capturing unified meaning for sentence classification”, In 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), (pp. 491–494), doi:10.1109/BigComp.2018.00079, 2018
[80] J. Liu, W.C. Chang, Y. Wu, and Y.Yang, “Deep learning for extreme multi-label text classification”, In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, pp. 115–124, doi:10.1145/3077136.3080834, 2017
指導教授 施國琛 教授(Prof. Timothy K. Shih) 審核日期 2021-6-9
