References
[1] D. Zelenko, C. Aone, and A. Richardella, “Kernel Methods for Relation Extraction,” Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 71-78, 2002.
[2] M. Miwa and M. Bansal, “End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures,” Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1105-1116, 2016.
[3] Y. Lin, S. Shen, Z. Liu, H. Luan, and M. Sun, “Neural Relation Extraction with Selective Attention over Instances,” Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2124-2133, 2016.
[4] C. Alt, M. Hübner, and L. Hennig, “Improving Relation Extraction by Pre-trained Language Representations,” ArXiv abs/1906.03088, 2019.
[5] Z. Wei, J. Su, Y. Wang, Y. Tian, and Y. Chang, “A Novel Cascade Binary Tagging Framework for Relational Triple Extraction,” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1476-1488, 2020.
[6] S. Zheng, F. Wang, H. Bao, Y. Hao, P. Zhou, and B. Xu, “Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme,” Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1227-1236, 2017.
[7] Y. Wang, B. Yu, Y. Zhang, T. Liu, H. Zhu, and L. Sun, “TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking,” Proceedings of the 28th International Conference on Computational Linguistics, pp. 1572-1582, 2020.
[8] Y. Wang, C. Sun, Y. Wu, H. Zhou, L. Li, and J. Yan, “UniRE: A Unified Label Space for Entity Relation Extraction,” Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 220-231, 2021.
[9] D. Ye, Y. Lin, P. Li, and M. Sun, “Packed Levitated Marker for Entity and Relation Extraction,” Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4904-4917, 2022.
[10] J. Zhao, W. Zhan, X. Zhao, Q. Zhang, T. Gui, Z. Wei, J. Wang, M. Peng, and M. Sun, “RE-Matching: A Fine-Grained Semantic Matching Method for Zero-Shot Relation Extraction,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6680-6691, 2023.
[11] R. Zhang, Y. Li, and L. Zou, “A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10853-10865, 2023.
[12] Z. Zhong and D. Chen, “A Frustratingly Easy Approach for Entity and Relation Extraction,” Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 50-61, 2021.
[13] D. Rumelhart, G. Hinton, and R. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, pp. 533-536, 1986.
[14] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[15] Y. LeCun, “Generalization and network design strategies,” Technical Report CRG-TR-89-4, University of Toronto, 1989.
[16] J. Lafferty, A. McCallum, and F. Pereira, “Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data,” Proceedings of the 18th International Conference on Machine Learning, pp. 282-289, 2001.
[17] C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, vol. 20, pp. 273-297, 1995.
[18] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, and I. Polosukhin, “Attention is All you Need,” Advances in Neural Information Processing Systems, vol. 30, pp. 5998-6008, 2017.
[19] J. West, D. Ventura, and S. Warnick, “Spring Research Presentation: A Theoretical Foundation for Inductive Transfer,” Brigham Young University, College of Physical and Mathematical Sciences, p. 32, 2007.
[20] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long and Short Papers), pp. 4171-4186, 2019.
[21] D. Han, T. Ye, Y. Han, Z. Xia, S. Song, and G. Huang, “Agent Attention: On the Integration of Softmax and Linear Attention,” ArXiv abs/2312.08874, 2023.
[22] S. Riedel, L. Yao, and A. McCallum, “Modeling relations and their mentions without labeled text,” Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 148-163, 2010.
[23] C. Gardent, A. Shimorina, S. Narayan, and L. Perez-Beltrachini, “Creating Training Corpora for NLG Micro-Planners,” Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 179-188, 2017.
[24] X. Zeng, D. Zeng, S. He, K. Liu, and J. Zhao, “Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism,” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 506-514, 2018.
[25] T. Fu, P. Li, and W. Ma, “GraphRel: Modeling Text as Relational Graphs for Joint Entity and Relation Extraction,” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1409-1418, 2019.
[26] J. Cheng, T. Zhang, S. Zhang, H. Ren, and G. Yu, “A Cascade Dual-Decoder Model for Joint Entity and Relation Extraction,” IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1-13, 2024.
[27] M. Hearst, “Automatic Acquisition of Hyponyms from Large Text Corpora,” Proceedings of the 14th International Conference on Computational Linguistics, 1992.
[28] I. Sutskever, O. Vinyals, and Q. Le, “Sequence to sequence learning with neural networks,” Proceedings of the 27th International Conference on Neural Information Processing Systems, pp. 3104-3112, 2014.
[29] L. Pratt, “Discriminability-based transfer between neural networks,” Proceedings of the 5th Neural Information Processing Systems, pp. 204-211, 1993.
[30] S. V. Stehman, “Selecting and interpreting measures of thematic classification accuracy,” Remote Sensing of Environment, vol. 62, no. 1, pp. 77-89, 1997.
[31] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.