References
[1] Zexuan Zhong and Danqi Chen. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
[2] Yu-Ming Shang, Heyan Huang, and Xianling Mao. OneRel: Joint entity and relation extraction with one module in one step. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11285–11293, 2022.
[3] Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. Fantastic questions and where to find them: FairytaleQA – an authentic dataset for narrative comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 447–460, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[4] Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. ACE 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57:45, 2006.
[5] Alison H. Paris and Scott G. Paris. Assessing narrative comprehension in young children. Reading Research Quarterly, 38(1):36–76, 2003.
[6] Rubel Das, Antariksha Ray, Souvik Mondal, and Dipankar Das. A rule based question generation framework to deal with simple and complex sentences. In 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 542–548. IEEE, 2016.
[7] Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer, 2017.
[8] Xinya Du, Junru Shao, and Claire Cardie. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352, Vancouver, Canada, July 2017. Association for Computational Linguistics.
[9] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.
[10] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.
[11] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas, November 2016. Association for Computational Linguistics.
[12] Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. Question answering on knowledge bases and text using universal schema and memory networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 358–365, Vancouver, Canada, July 2017. Association for Computational Linguistics.
[13] Xiao Huang, Jingyuan Zhang, Dingcheng Li, and Ping Li. Knowledge graph embedding based question answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 105–113, 2019.
[14] Ellen Riloff and Michael Thelen. A rule-based question answering system for reading comprehension tests. In ANLP-NAACL 2000 Workshop: Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems, 2000.
[15] Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, and Hui Su. Narrative question answering with cutting-edge open-domain QA techniques: A comprehensive study. Transactions of the Association for Computational Linguistics, 9:1032–1046, 2021.
[16] Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. Named entity recognition with context-aware dictionary knowledge. In Proceedings of the 19th Chinese National Conference on Computational Linguistics, pages 915–926, Haikou, China, October 2020. Chinese Information Processing Society of China.
[17] Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering, 34(1):50–70, 2020.
[18] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Readings in Speech Recognition, pages 267–296, 1990.
[19] Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, pages 591–598, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[20] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282–289, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[21] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
[22] Zhiheng Huang, Wei Xu, and Kai Yu. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991, 2015.
[23] Zhenjin Dai, Xutao Wang, Pin Ni, Yuming Li, Gangmin Li, and Xuming Bai. Named entity recognition using BERT BiLSTM CRF for Chinese electronic health records. In 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pages 1–5, 2019.
[24] Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. Relation classification via multi-level attention CNNs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298–1307, Berlin, Germany, August 2016. Association for Computational Linguistics.
[25] David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784–5789, Hong Kong, China, November 2019. Association for Computational Linguistics.
[26] Makoto Miwa and Mohit Bansal. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105–1116, Berlin, Germany, August 2016. Association for Computational Linguistics.
[27] Suncong Zheng, Yuexing Hao, Dongyuan Lu, Hongyun Bao, Jiaming Xu, Hongwei Hao, and Bo Xu. Joint entity and relation extraction based on a hybrid neural network. Neurocomputing, 257:59–66, 2017.
[28] Kui Xue, Yangming Zhou, Zhiyuan Ma, Tong Ruan, Huanhuan Zhang, and Ping He. Fine-tuning BERT for joint entity and relation extraction in Chinese medical text. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 892–897. IEEE, 2019.
[29] Wei Xiang and Bang Wang. A survey of event extraction from text. IEEE Access, 7:173111–173137, 2019.
[30] Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. Text2Event: Controllable sequence-to-structure generation for end-to-end event extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2795–2806, Online, August 2021. Association for Computational Linguistics.
[31] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
[32] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online, July 2020. Association for Computational Linguistics.
[33] Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Li, Mo Yu, and Ying Xu. It is AI's turn to ask humans a question: Question-answer pair generation for children's story books. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–744, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[34] Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017.
[35] Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6, Melbourne, Australia, July 2018. Association for Computational Linguistics.
[36] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
[37] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. CoRR, abs/1909.11942, 2019.
[38] Deming Ye, Yankai Lin, Peng Li, and Maosong Sun. Packed levitated marker for entity and relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4904–4917, Dublin, Ireland, May 2022. Association for Computational Linguistics.