Master's/Doctoral Thesis 109522078: Detailed Record




Author 黃覺修 (Jue-Xiu Huang)   Department Computer Science and Information Engineering (資訊工程學系)
Thesis Title 應用事件擷取於故事理解之研究
(Story Retelling and Summarization via Story Event Extraction)
Related Theses
★ 行程邀約郵件的辨識與不規則時間擷取之研究
★ NCUFree校園無線網路平台設計及應用服務開發
★ 網際網路半結構性資料擷取系統之設計與實作
★ 非簡單瀏覽路徑之探勘與應用
★ 遞增資料關聯式規則探勘之改進
★ 應用卡方獨立性檢定於關連式分類問題
★ 中文資料擷取系統之設計與研究
★ 非數值型資料視覺化與兼具主客觀的分群
★ 關聯性字組在文件摘要上的探討
★ 淨化網頁:網頁區塊化以及資料區域擷取
★ 問題答覆系統使用語句分類排序方式之設計與研究
★ 時序資料庫中緊密頻繁連續事件型樣之有效探勘
★ 星狀座標之軸排列於群聚視覺化之應用
★ 由瀏覽歷程自動產生網頁抓取程式之研究
★ 動態網頁之樣版與資料分析研究
★ 同性質網頁資料整合之自動化研究
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. Full text that has reached its release date is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Summaries, as a means for people to grasp information quickly, have long been one of the main research directions in natural language processing. Today's summarization models rely mainly on deep learning, letting the model itself decide what the key points of an article are and what the generated summary should contain, which leaves little room for human control. This thesis argues that in certain summarization scenarios, the focus of a summary should not be decided by the model alone; additional information is needed to help the model produce summaries closer to the article's key points. We therefore modify the input of an existing summarization model so that it generates summaries with the corresponding content. In addition, we apply transfer learning to an information extraction model to make it better suited to our use case.
Abstract (English) Summarization is the main method that helps people quickly understand the information in an article, and it is also a major research topic in Natural Language Processing. Modern abstractive summarization models rely mainly on deep learning methods and require the model itself to determine the key points of the article and the content of the summary, leaving few human-controllable factors. In this thesis, we argue that in some summarization scenarios the content of the summary should not depend on the model alone; additional information is needed to help the model generate topic-related summaries. We therefore modify the input of the model to let it generate summaries with the corresponding content. Additionally, we apply transfer learning on an existing information extraction model to make it more suitable for our scenario.
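Note: as an illustration of the input modification described in the abstracts, the sketch below shows one way to prepend extracted event triples to a summarizer's input. It assumes a pretrained BART model loaded via the Hugging Face transformers library; the <event> marker format, the sample story, and the hard-coded triples are illustrative assumptions, not the thesis's actual configuration.

# A minimal sketch of event-conditioned abstractive summarization:
# extracted event triples are flattened into a textual prefix so the
# decoder attends to the chosen key points as well as the story itself.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

story = "The wolf knocked on the door and pretended to be the goats' mother ..."

# Events as an extraction step (e.g., Text2Event or an SRL tagger) might
# produce them; hard-coded here for illustration.
events = [("wolf", "knock", "door"), ("wolf", "pretend", "mother")]

# Prepend the flattened events to the source text before encoding.
event_prefix = " ".join(f"<event> {s} {v} {o}" for s, v, o in events)
inputs = tokenizer(event_prefix + " </s> " + story,
                   return_tensors="pt", truncation=True, max_length=1024)

summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=80)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))

In practice the event markers would typically be added to the tokenizer's vocabulary and the model fine-tuned on (events + story, summary) pairs so that it learns to exploit the prefix rather than treat it as noise.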
Keywords (Chinese) ★ 機器自動摘要
★ 事件擷取
★ 遷移式學習
Keywords (English) ★ Abstractive Summarization
★ Event Extraction
★ Transfer Learning
Thesis Outline
Chinese Abstract
English Abstract
Table of Contents
List of Figures
List of Tables
1. Introduction
   1.1 Challenges
   1.2 Objectives
2. Related Work
   2.1 Abstractive Text Summarization
   2.2 Information Extraction
       2.2.1 NER and Relation Extraction
       2.2.2 Event Extraction
       2.2.3 Transfer Learning
3. Story Event Extraction
   3.1 Task Description
   3.2 Method
       3.2.1 Story Event Annotation
       3.2.2 Annotation System
       3.2.3 Transfer-Based Event Extraction Model Architecture
   3.3 Dataset
   3.4 Evaluation Method
   3.5 Experimental Results
4. Story Summarization
   4.1 Task Description
   4.2 Method
       4.2.1 Using Text2Event as the Event Extraction Source
       4.2.2 Using Semantic Role Labeling as the Event Extraction Source
   4.3 Dataset
   4.4 Evaluation Method
   4.5 Experimental Results
       4.5.1 Human Evaluation
       4.5.2 Case Study
5. Conclusion
References
References
[1] Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, and Chandan K. Reddy. Neural abstractive text summarization with sequence-to-sequence models. Trans. Data Sci., 2(1):1:1–1:37, 2021.
[2] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[3] Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. Text2Event: Controllable sequence-to-structure generation for end-to-end event extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2795–2806, Online, August 2021. Association for Computational Linguistics.
[4] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016.
[5] Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 2018.
[6] Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
[8] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
[9] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online, July 2020. Association for Computational Linguistics.
[10] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
[11] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics.
[12] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
[13] Yaser Keneshloo, Tian Shi, Naren Ramakrishnan, and Chandan K. Reddy. Deep reinforcement learning for sequence-to-sequence models. IEEE Transactions on Neural Networks and Learning Systems, 31(7):2469–2489, 2020.
[14] Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. Enhancing factual consistency of abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718–733, Online, June 2021. Association for Computational Linguistics.
[15] Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. Fantastic questions and where to find them: FairytaleQA – an authentic dataset for narrative comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 447–460, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[16] Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength natural language processing in Python. 2020.
[17] Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6, Melbourne, Australia, July 2018. Association for Computational Linguistics.
[18] Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Çağlar Gülçehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Yoav Goldberg and Stefan Riezler, editors, Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280–290. ACL, 2016.
[19] Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693–1701, 2015.
Advisor 張嘉惠 (Chia-Hui Chang)   Review Date 2022-9-22
