References
[1] Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han,
Le Sun, and Hua Wu. Unified structure generation for universal information
extraction. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio,
editors, Proceedings of the 60th Annual Meeting of the Association for Com-
putational Linguistics (Volume 1: Long Papers), pages 5755–5772, Dublin,
Ireland, May 2022. Association for Computational Linguistics.
[2] Somin Wadhwa, Silvio Amir, and Byron Wallace. Revisiting relation ex-
traction in the era of large language models. In Anna Rogers, Jordan
Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long
Papers), pages 15566–15589, Toronto, Canada, July 2023. Association for
Computational Linguistics.
[3] Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda.
ACE 2005 multilingual training corpus. Linguistic Data Consortium,
Philadelphia, 2006.
[4] Dan Roth and Wen-tau Yih. A linear programming formulation for global
inference in natural language tasks. In Proceedings of the Eighth Confer-
ence on Computational Natural Language Learning (CoNLL-2004) at HLT-
NAACL 2004, pages 1–8, Boston, Massachusetts, USA, May 6 - May 7 2004.
Association for Computational Linguistics.
[5] Sebastian Riedel, Limin Yao, and Andrew McCallum. Modeling relations
and their mentions without labeled text. In Machine Learning and Knowl-
edge Discovery in Databases: European Conference, ECML PKDD 2010,
Barcelona, Spain, September 20-24, 2010, Proceedings, Part III 21, pages
148–163. Springer, 2010.
[6] Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu,
Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. DocRED: A large-
scale document-level relation extraction dataset. In Anna Korhonen, David Traum, and Lluís Màrquez, editors, Proceedings of the 57th Annual Meeting
of the Association for Computational Linguistics, pages 764–777, Florence,
Italy, July 2019. Association for Computational Linguistics.
[7] Youmi Ma, An Wang, and Naoaki Okazaki. DREEAM: Guiding attention
with evidence for improving document-level relation extraction. In Andreas
Vlachos and Isabelle Augenstein, editors, Proceedings of the 17th Conference
of the European Chapter of the Association for Computational Linguistics,
pages 1971–1983, Dubrovnik, Croatia, May 2023. Association for Computa-
tional Linguistics.
[8] Guoquan Dai, Xizhao Wang, Xiaoying Zou, Chao Liu, and Si Cen. MRGAT:
Multi-relational graph attention network for knowledge graph completion.
Neural Networks, 154:234–245, 2022.
[9] Linfeng Li, Peng Wang, Jun Yan, Yao Wang, Simin Li, Jinpeng Jiang, Zhe
Sun, Buzhou Tang, Tsung-Hui Chang, Shenghui Wang, and Yuting Liu.
Real-world data medical knowledge graph: construction and applications.
Artificial Intelligence in Medicine, 103:101817, 2020.
[10] Jung-Jun Kim, Dong-Gyu Lee, Jialin Wu, Hong-Gyu Jung, and Seong-Whan
Lee. Visual question answering based on local-scene-aware referring expres-
sion generation. Neural Networks, 139:158–167, 2021.
[11] Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. Improving multi-
hop question answering over knowledge graphs using knowledge base embed-
dings. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault,
editors, Proceedings of the 58th Annual Meeting of the Association for Com-
putational Linguistics, pages 4498–4507, Online, July 2020. Association for
Computational Linguistics.
[12] Marco Antonio Calijorne Soares and Fernando Silva Parreiras. A literature
review on question answering techniques, paradigms and systems. Journal of
King Saud University - Computer and Information Sciences, 32(6):635–646,
2020.
[13] Weizhao Li, Feng Ge, Yi Cai, and Da Ren. A conversational model for
eliciting new chatting topics in open-domain conversation. Neural Networks,
144:540–552, 2021.
[14] Yunyi Yang, Yunhao Li, and Xiaojun Quan. UBAR: Towards fully end-to-end
task-oriented dialog system with GPT-2. Proceedings of the AAAI Conference
on Artificial Intelligence, 35(16):14230–14238, May 2021.
[15] Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, and Ying Shen. Chinese
relation extraction with multi-grained information and external linguistic
knowledge. In Anna Korhonen, David Traum, and Lluís Màrquez, editors,
Proceedings of the 57th Annual Meeting of the Association for Computational
Linguistics, pages 4377–4386, Florence, Italy, July 2019. Association for
Computational Linguistics.
[16] Jiaqi Hou, Xin Li, Haipeng Yao, Haichun Sun, Tianle Mai, and Rongchen
Zhu. BERT-based Chinese relation extraction for public security. IEEE Access,
8:132367–132375, 2020.
[17] Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. Distant super-
vision for relation extraction without labeled data. In Keh-Yih Su, Jian Su,
Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference
of the 47th Annual Meeting of the ACL and the 4th International Joint Con-
ference on Natural Language Processing of the AFNLP, pages 1003–1011,
Suntec, Singapore, August 2009. Association for Computational Linguistics.
[18] Ang Sun, Ralph Grishman, and Satoshi Sekine. Semi-supervised relation ex-
traction with large-scale word clustering. In Dekang Lin, Yuji Matsumoto,
and Rada Mihalcea, editors, Proceedings of the 49th Annual Meeting of the
Association for Computational Linguistics: Human Language Technologies,
pages 521–529, Portland, Oregon, USA, June 2011. Association for Compu-
tational Linguistics.
[19] Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. GraphRel: Modeling text as
relational graphs for joint entity and relation extraction. In Anna Korhonen,
David Traum, and Lluís Màrquez, editors, Proceedings of the 57th Annual
Meeting of the Association for Computational Linguistics, pages 1409–1418,
Florence, Italy, July 2019. Association for Computational Linguistics.
[20] Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man
Lan, Shiliang Sun, and Nan Duan. Joint type inference on entities and rela-
tions via graph convolutional networks. In Anna Korhonen, David Traum,
and Lluís Màrquez, editors, Proceedings of the 57th Annual Meeting of the
Association for Computational Linguistics, pages 1361–1370, Florence, Italy,
July 2019. Association for Computational Linguistics.
[21] Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. A novel
cascade binary tagging framework for relational triple extraction. In Dan
Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault, editors, Pro-
ceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1476–1488, Online, July 2020. Association for Computa-
tional Linguistics.
[22] Bowen Yu, Zhenyu Zhang, Xiaobo Shu, Yubin Wang, Tingwen Liu, Bin
Wang, and Sujian Li. Joint extraction of entities and relations based on a
novel decomposition strategy. In Proceedings of the European Conference on
Artificial Intelligence (ECAI), 2020.
[23] Hengyi Zheng, Rui Wen, Xi Chen, Yifan Yang, Yunyan Zhang, Ziheng
Zhang, Ningyu Zhang, Bin Qin, Xu Ming, and Yefeng Zheng. PRGC: Poten-
tial relation and global correspondence based joint relational triple extrac-
tion. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors,
Proceedings of the 59th Annual Meeting of the Association for Computa-
tional Linguistics and the 11th International Joint Conference on Natural
Language Processing (Volume 1: Long Papers), pages 6225–6235, Online,
August 2021. Association for Computational Linguistics.
[24] Feiliang Ren, Longhui Zhang, Xiaofeng Zhao, Shujuan Yin, Shilei Liu, and
Bochao Li. A simple but effective bidirectional framework for relational
triple extraction, 2022.
[25] Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. De-
bias for generative extraction in unified NER task. In Smaranda Muresan,
Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th An-
nual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 808–818, Dublin, Ireland, May 2022. Association for
Computational Linguistics.
[26] Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu,
and Limin Sun. TPLinker: Single-stage joint extraction of entities and rela-
tions through token pair linking. In Donia Scott, Nuria Bel, and Chengqing
Zong, editors, Proceedings of the 28th International Conference on Compu-
tational Linguistics, pages 1572–1582, Barcelona, Spain (Online), December
2020. International Committee on Computational Linguistics.
[27] Feiliang Ren, Longhui Zhang, Shujuan Yin, Xiaofeng Zhao, Shilei Liu,
Bochao Li, and Yaduo Liu. A novel global feature-oriented relational triple
extraction model based on table filling. In Marie-Francine Moens, Xuan-
jing Huang, Lucia Specia, and Scott Wen-tau Yih, editors, Proceedings of
the 2021 Conference on Empirical Methods in Natural Language Processing,
pages 2646–2656, Online and Punta Cana, Dominican Republic, November
2021. Association for Computational Linguistics.
[28] Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei Li, and Junchi Yan.
UniRE: A unified label space for entity relation extraction. In Chengqing
Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Proceedings of the
59th Annual Meeting of the Association for Computational Linguistics and
the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 220–231, Online, August 2021. Association
for Computational Linguistics.
[29] Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. Ex-
tracting relational facts by an end-to-end neural model with copy mecha-
nism. In Iryna Gurevych and Yusuke Miyao, editors, Proceedings of the 56th
Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 506–514, Melbourne, Australia, July 2018. Association
for Computational Linguistics.
[30] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang,
Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model
pre-training for natural language understanding and generation. Advances
in Neural Information Processing Systems, 32, 2019.
[31] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman
Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. BART: Denoising
sequence-to-sequence pre-training for natural language generation,
translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[32] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits
of transfer learning with a unified text-to-text transformer. Journal of
Machine Learning Research, 21(140):1–67, 2020.
[33] Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei
Huang, and Huajun Chen. Contrastive triple extraction with generative
transformer. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 35, pages 14257–14265, 2021.
[34] Pere-Lluís Huguet Cabot and Roberto Navigli. REBEL: Relation extraction
by end-to-end language generation. In Marie-Francine Moens, Xuanjing
Huang, Lucia Specia, and Scott Wen-tau Yih, editors, Findings of the As-
sociation for Computational Linguistics: EMNLP 2021, pages 2370–2381,
Punta Cana, Dominican Republic, November 2021. Association for Compu-
tational Linguistics.
[35] Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin
Zhou, and Jiwei Li. Entity-relation extraction as multi-turn question an-
swering. In Anna Korhonen, David Traum, and Lluís Màrquez, editors,
Proceedings of the 57th Annual Meeting of the Association for Computa-
tional Linguistics, pages 1340–1350, Florence, Italy, July 2019. Association
for Computational Linguistics.
[36] Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen
Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, et al. Zero-shot
information extraction via chatting with ChatGPT. arXiv preprint
arXiv:2302.10205, 2023.
[37] Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro
Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, and
Stefano Soatto. Structured prediction as translation between augmented
natural languages. In 9th International Conference on Learning Represen-
tations, ICLR 2021, 2021.
[38] Hao Fei, Shengqiong Wu, Jingye Li, Bobo Li, Fei Li, Libo Qin, Meishan
Zhang, Min Zhang, and Tat-Seng Chua. LasUIE: Unifying information extraction
with latent adaptive structure-aware generative language model.
Advances in Neural Information Processing Systems, 35:15460–15475, 2022.
[39] Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaud-
hary, Francisco Guzmán, Armand Joulin, and Edouard Grave. CCNet:
Extracting high quality monolingual datasets from web crawl data. In Nico-
letta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christo-
pher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard,
Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, and Stelios
Piperidis, editors, Proceedings of the Twelfth Language Resources and Eval-
uation Conference, pages 4003–4012, Marseille, France, May 2020. European
Language Resources Association.
[40] Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé
Jégou, and Tomas Mikolov. FastText.zip: Compressing text classification
models. arXiv preprint arXiv:1612.03651, 2016.
[41] Hui Wu, Yuting He, Yidong Chen, Yu Bai, and Xiaodong Shi. Improv-
ing few-shot relation extraction through semantics-guided learning. Neural
Networks, 169:453–461, 2024.
[42] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou,
Aditya Siddhant, Aditya Barua, and Colin Raffel. mT5: A massively multilingual pre-trained text-to-text transformer. In Kristina Toutanova,
Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven
Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou, editors,
Proceedings of the 2021 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies,
pages 483–498, Online, June 2021. Association for Computational Linguistics.