References
[1] Alexander Yates, Michele Banko, Matthew Broadhead, Michael Cafarella, Oren Etzioni, and Stephen Soderland. 2007. TextRunner: Open Information Extraction on the Web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT). Association for Computational Linguistics, pages 25–26.
[2] Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2010. Semantic Role Labeling for Open Information Extraction. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading. Association for Computational Linguistics, pages 52–60.
[3] Harinder Pal and Mausam. 2016. Demonyms and Compound Relational Nouns in Nominal Open IE. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction. Association for Computational Linguistics, pages 35–39. 10.18653/v1/W16-1307.
[4] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735.
[5] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 5998–6008.
[6] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1532–1543. 10.3115/v1/D14-1162.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, pages 4171–4186. 10.18653/v1/N19-1423.
[8] Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A Robustly Optimized BERT Pre-training Approach with Post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218–1227.
[9] Lung-Hao Lee and Yi Lu. 2021. Multiple Embeddings Enhanced Multi-Graph Neural Networks for Chinese Healthcare Named Entity Recognition. IEEE Journal of Biomedical and Health Informatics, 25(7):2801–2810.
[10] Fei Wu and Daniel S. Weld. 2010. Open Information Extraction Using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 118–127.
[11] Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open Language Learning for Information Extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 523–534.
[12] Likun Qiu and Yue Zhang. 2014. ZORE: A Syntax-based System for Chinese Open Relation Extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1870–1880.
[13] Shengbin Jia et al. 2018. Chinese Open Relation Extraction and Knowledge Base Establishment. ACM Transactions on Asian and Low-Resource Language Information Processing, pages 1–22.
[14] Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging Linguistic Structure For Open Domain Information Extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 344–354. 10.3115/v1/P15-1034
[15] Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised Open Information Extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, pages 885–895.
[16] Arpita Roy, Youngja Park, Taesung Lee, and Shimei Pan. 2019. Supervising Unsupervised Open Information Extraction Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, pages 728–737. 10.18653/v1/D19-1067.
[17] Junlang Zhan and Hai Zhao. 2020. Span Model for Open Information Extraction on Accurate Corpus. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9523–9530.
[18] Youngbin Ro, Yukyung Lee, and Pilsung Kang. 2020. Multi^2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, pages 1107–1117. 10.18653/v1/2020.findings-emnlp.99.
[19] 鄭少鈞. January 2022. Chinese Healthcare Open Information Extraction Using Pipelined Language Transformers. Master's thesis, Department of Electrical Engineering, National Central University.
[20] Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1535–1545.
[21] Kiril Gashteovski et al. 2019. OPIEC: An Open Information Extraction Corpus. In Proceedings of the Conference on Automated Knowledge Base Construction (AKBC).
[22] Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. PATTY: A Taxonomy of Relational Patterns with Semantic Types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 1135–1145.
[23] Gabriel Stanovsky and Ido Dagan. 2016. Creating a Large Benchmark for Open Information Extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2300–2305.
[24] Rudolf Schneider, Tom Oberhauser, Tobias Klatt, Felix A. Gers, and Alexander Löser. 2017. Analysing Errors of Open Information Extraction Systems. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems. Association for Computational Linguistics, pages 11–18. 10.18653/v1/W17-5402.
[25] Sangnie Bhardwaj, Samarth Aggarwal, and Mausam. 2019. CaRB: A Crowdsourced Benchmark for Open IE. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, pages 6262–6267.
[26] William Lechelle, Fabrizio Gotti, and Philippe Langlais. 2019. WiRe57: A Fine-Grained Benchmark for Open Information Extraction. In Proceedings of the 13th Linguistic Annotation Workshop. Association for Computational Linguistics, pages 6–15.
[27] Kiril Gashteovski, Rainer Gemulla, and Luciano del Corro. 2017. MinIE: Minimizing Facts in Open Information Extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2630–2640. 10.18653/v1/D17-1278.
[28] Yuen-Hsien Tseng, Lung-Hao Lee, Shu-Yen Lin, Bo-Shun Liao, Mei-Jun Liu, Hsin-Hsi Chen, Oren Etzioni, and Anthony Fader. 2014. Chinese Open Relation Extraction for Knowledge Acquisition. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers. Association for Computational Linguistics, pages 12–16. 10.3115/v1/E14-4003.
[29] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.
[30] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In Proceedings of the International Conference on Learning Representations.
[31] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
[32] Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting Pre-Trained Models for Chinese Natural Language Processing. In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, pages 657–668.