Thesis 111525018: Detailed Record




Author: 黃淯銘 (Yu-Ming Huang)    Graduate Program: Graduate Institute of Software Engineering
Thesis Title: 透過事件和關係擷取控制問題生成
(Controlling Question Generation Through Events and Relation Extraction)
Related Theses
★ A Study on Identifying Schedule-Invitation Emails and Extracting Irregular Time Expressions (行程邀約郵件的辨識與不規則時間擷取之研究)
★ Design of the NCUFree Campus Wireless Network Platform and Development of Application Services (NCUFree校園無線網路平台設計及應用服務開發)
★ Design and Implementation of a Semi-structured Web Data Extraction System (網際網路半結構性資料擷取系統之設計與實作)
★ Mining and Applications of Non-simple Browsing Paths (非簡單瀏覽路徑之探勘與應用)
★ Improvements to Incremental Association Rule Mining (遞增資料關聯式規則探勘之改進)
★ Applying the Chi-square Independence Test to Associative Classification (應用卡方獨立性檢定於關連式分類問題)
★ Design and Study of a Chinese Data Extraction System (中文資料擷取系統之設計與研究)
★ Visualization of Non-numeric Data and Clustering with Subjective and Objective Criteria (非數值型資料視覺化與兼具主客觀的分群)
★ A Study of Associated Word Groups in Document Summarization (關聯性字組在文件摘要上的探討)
★ Cleaning Web Pages: Page Segmentation and Data-Region Extraction (淨化網頁:網頁區塊化以及資料區域擷取)
★ Design and Study of Sentence Classification and Ranking for Question Answering Systems (問題答覆系統使用語句分類排序方式之設計與研究)
★ Efficient Mining of Compact Frequent Sequential Event Patterns in Temporal Databases (時序資料庫中緊密頻繁連續事件型樣之有效探勘)
★ Axis Arrangement in Star Coordinates for Cluster Visualization (星狀座標之軸排列於群聚視覺化之應用)
★ Automatic Generation of Web Scraping Programs from Browsing Histories (由瀏覽歷程自動產生網頁抓取程式之研究)
★ Template and Data Analysis of Dynamic Web Pages (動態網頁之樣版與資料分析研究)
★ Automated Integration of Homogeneous Web Data (同性質網頁資料整合之自動化研究)
Files: Full text available via the library system after 2024-12-31.
Abstract (Chinese) Question generation technology has important applications in fields such as education and automated question-answering systems. Its main purpose is to automatically generate questions and answers from text, helping teachers assess students' understanding and providing personalized learning that improves learner engagement and learning outcomes. Previous research has focused mostly on text filtering and answer generation, while controlling question difficulty remains a challenge. For example, traditional methods rely mainly on the answer and a text span to generate a question, which makes difficulty hard to control. In addition, existing question generation models are weak at generating multi-hop and cross-paragraph questions.

This study proposes a method that controls question generation through event and relation extraction, aiming to improve the quality and diversity of the generated questions. The original FairytaleQA dataset contains only passages, questions, and answers, without additional event and relation information. We therefore recruited annotators to label the FairytaleQA dataset, attaching relevant event and relation information to each question-answer pair and thereby enriching the dataset.

Next, we trained models on the annotated data and generated questions of varying difficulty based on two rules: (1) replacing pronouns in event arguments: by providing information about both the subject and the pronoun, the model learns what each pronoun in an extracted event refers to and generates the corresponding question; (2) chaining events through extracted relations: by linking multiple related events, the model receives more contextual information and can generate cross-paragraph questions. The results show that giving the model more information significantly raises question difficulty, with the proportion of difficult questions rising from 39% to 45%. Moreover, when the paragraphs containing a subject and its pronoun are farther apart, or when two linked events are farther apart, the generated questions also become significantly harder, with the proportion of difficult questions rising from 33% to 45%.

Our method not only increases the complexity and challenge of the generated questions but also offers finer control over their difficulty. This has potential significance for personalized education applications and automated question-answering systems, and can effectively enhance learners' sense of achievement and engagement.
Abstract (English) Question generation technology has significant applications in various fields such as education and automated question-answering systems. Its primary purpose is to automatically generate questions and answers from text, helping teachers assess students' understanding and providing personalized learning to enhance learner engagement and learning outcomes. Previous research has focused mainly on text filtering and answer generation, but challenges remain in controlling question difficulty. For example, traditional methods rely primarily on answers and text spans to generate questions, making it difficult to control question difficulty effectively.

This study proposes a method to control question generation through event and relation extraction, improving the quality and diversity of the generated questions. The original FairytaleQA dataset includes only text, questions, and answers, and lacks additional event and relation information. We therefore invited annotators to label the FairytaleQA dataset, marking relevant event and relation information for the questions and answers and thereby enriching the dataset.

Next, we trained the model on this annotated data and generated questions of varying difficulty based on the following two rules: (1) replacing pronouns in event arguments: by providing information on subjects and pronouns, the model learns what each pronoun in an extracted event refers to and generates the corresponding question; (2) linking events in relation extraction: by chaining multiple related events, the model receives more contextual information and can generate cross-paragraph questions. The results show that providing the model with more information significantly increases the difficulty of the generated questions, with the proportion of difficult questions rising from 39% to 45%. Additionally, when the paragraphs containing a subject and its pronoun are farther apart, or when two linked events are farther apart, the difficulty of the generated questions also increases significantly, with the proportion of difficult questions rising from 33% to 45%.

This approach not only increases the complexity and challenge of the generated questions but also provides finer control over question difficulty, which has potential significance for personalized education applications and automated question-answering systems.
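The two difficulty-control rules above can be illustrated as input-construction steps for a sequence-to-sequence question generation model. The sketch below is purely hypothetical: the record formats, the `<sep>` markers, and the function names are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of the two input-construction rules described above.
# The "<sep>" markers and the phrasing of the hints are assumptions.

def build_pronoun_input(event: str, subject: str, pronoun: str, context: str) -> str:
    """Rule 1: expose the subject behind a pronoun in an event argument,
    so the model can resolve the pronoun when generating a question."""
    hint = f"{pronoun} refers to {subject}"
    return f"generate question: {event} <sep> {hint} <sep> {context}"

def build_chained_input(events: list[str], relation: str, context: str) -> str:
    """Rule 2: chain multiple events linked by an extracted relation,
    giving the model cross-paragraph context."""
    chain = f" {relation} ".join(events)
    return f"generate question: {chain} <sep> {context}"

# Example: two events linked by a temporal relation from different paragraphs.
src = build_chained_input(
    ["the king banished the prince", "the prince met a witch"],
    "BEFORE",
    "Once upon a time ...",
)
print(src)
```

Under this framing, "giving the model more information" simply means the source sequence carries more events or longer-range hints, which is what the thesis reports as raising question difficulty.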
Keywords (Chinese) ★ Question Generation (問題生成)
★ Event Extraction (事件擷取)
★ Relation Extraction (關係擷取)
★ Question Difficulty Control (問題難度控制)
Keywords (English) ★ Question Generation
★ Event Extraction
★ Relation Extraction
★ Difficulty Controllable
Table of Contents
Abstract (Chinese)
Abstract (English)
Contents
List of Figures
List of Tables
1. Introduction
  1-1 Objectives
  1-2 Challenges
  1-3 Contributions
2. Related Work
  2-1 Question Generation
  2-2 Question Difficulty Control
  2-3 Information Extraction
    2-3-1 ACE Automatic Content Extraction [1]
    2-3-2 ATOMIC Event Relations [2]
  2-4 Language Models
    2-4-1 Sentence Transformers
3. Information Extraction
  3-1 Relation/Event Extraction Dataset
    3-1-1 Relation/Event Extraction Annotation Content
    3-1-2 Inter-annotator Agreement
  3-2 Information Extraction Model
    3-2-1 Information Extraction Dataset
    3-2-2 Training the Information Extraction Model
    3-2-3 Evaluating the Information Extraction Model
4. Difficulty-Controllable Question Generation
  4-1 Datasets for the Question Generation and Filtering Models
  4-2 Question Generation Model
    4-2-1 Question Generation Results
    4-2-2 Controlling Question Difficulty
  4-3 Filtering Model
    4-3-1 Filtering Model Results
5. Evaluating Question Difficulty
  5-1 Evaluation Method
  5-2 Question Answering Dataset
  5-3 Question Answering Model
    5-3-1 Question Answering Model Results
  5-4 Evaluating the Difficulty of Generated Questions
    5-4-1 Generating Gold-Standard Answers
    5-4-2 Analysis and Discussion
    5-4-3 Case Studies
6. Conclusion
Index
References
References
[1] Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. ACE 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57:45, 2006.
[2] Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027–3035, 2019.
[3] Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. An empirical comparison of LM-based question and answer generation methods. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 14262–14272, Toronto, Canada, July 2023. Association for Computational Linguistics.
[4] Yi Cheng, Siyao Li, Bang Liu, Ruihui Zhao, Sujian Li, Chenghua Lin, and Yefeng Zheng. Guiding the growth: Difficulty-controllable question generation through step-by-step rewriting. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5968–5978, Online, August 2021. Association for Computational Linguistics.
[5] Yifan Gao, Lidong Bing, Wang Chen, Michael R Lyu, and Irwin King. Difficulty controllable generation of reading comprehension questions. arXiv preprint arXiv:1807.03586, 2018.
[6] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas, November 2016. Association for Computational Linguistics.
[7] Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada, August 2017. Association for Computational Linguistics.
[8] Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328, 2018.
[9] Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. Fantastic questions and where to find them: FairytaleQA – an authentic dataset for narrative comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 447–460, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[10] Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30:121–204, 2020.
[11] Masaki Uto, Yuto Tomikawa, and Ayaka Suzuki. Difficulty-controllable neural question generation for reading comprehension using item response theory. In Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, and Torsten Zesch, editors, Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 119–129, Toronto, Canada, July 2023. Association for Computational Linguistics.
[12] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084, 2019.
[13] Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. Fantastic questions and where to find them: FairytaleQA – an authentic dataset for narrative comprehension. Association for Computational Linguistics, 2022.
[14] Di Lu, Shihao Ran, Joel Tetreault, and Alejandro Jaimes. Event extraction as question generation and answering. arXiv preprint arXiv:2307.05567, 2023.
[15] Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022.
[16] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
[17] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53, 2024.
[18] Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Li, Mo Yu, and Ying Xu. It is AI's turn to ask humans a question: Question-answer pair generation for children's story books. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–744, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[19] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics.
[20] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
[21] Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6, Melbourne, Australia, July 2018. Association for Computational Linguistics.
[22] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
[23] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654, 2020.
Advisor: 張嘉惠 (Chia-Hui Chang)    Date of Approval: 2024-7-26
