References
Bi, K., Jha, R., Croft, B., & Celikyilmaz, A. (2021, April). AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Online.
Cui, Y., Che, W., Liu, T., Qin, B., Wang, S., & Hu, G. (2020, November). Revisiting Pre-Trained Models for Chinese Natural Language Processing. Findings of the Association for Computational Linguistics: EMNLP 2020, Online.
Cui, Y., Che, W., Liu, T., Qin, B., & Yang, Z. (2021). Pre-Training With Whole Word Masking for Chinese BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 3504-3514. https://doi.org/10.1109/TASLP.2021.3124365
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019, June). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota.
Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap. CRC Press.
Hermann, K. M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom, P. (2015). Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, 28.
Hung, S.-S., Huang, H.-H., & Chen, H.-H. (2020, July). A Complete Shift-Reduce Chinese Discourse Parser with Robust Dynamic Oracle. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online.
Jia, R., Cao, Y., Tang, H., Fang, F., Cao, C., & Wang, S. (2020, November). Neural Extractive Summarization with Hierarchical Attentive Heterogeneous Graph Network. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.
Lee, L. H., & Lu, Y. (2021). Multiple Embeddings Enhanced Multi-Graph Neural Networks for Chinese Healthcare Named Entity Recognition. IEEE Journal of Biomedical and Health Informatics, 25(7), 2801-2810. https://doi.org/10.1109/JBHI.2020.3048700
Lin, C.-Y. (2004, July). ROUGE: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out, Barcelona, Spain.
Liu, Y., & Lapata, M. (2019, November). Text Summarization with Pretrained Encoders. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Mihalcea, R., & Tarau, P. (2004, July). TextRank: Bringing Order into Text. Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain.
Nallapati, R., Zhai, F., & Zhou, B. (2017). SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10958
Narayan, S., Cohen, S. B., & Lapata, M. (2018, June). Ranking Sentences for Extractive Summarization with Reinforcement Learning. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). Graph Attention Networks. International Conference on Learning Representations. https://openreview.net/forum?id=rJXMpikCZ
Sandhaus, E. (2008). The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, H., Liu, C., Xi, N., Qiang, Z., Zhao, S., Qin, B., & Liu, T. (2023). HuaTuo: Tuning LLaMA model with Chinese medical knowledge. arXiv preprint arXiv:2304.06975.
Zhao, M., Yan, S., Liu, B., Zhong, X., Hao, Q., Chen, H., Niu, D., Long, B., & Guo, W. (2021). QBSUM: A large-scale query-based document summarization dataset from real-world applications. Computer Speech & Language, 66, 101166. https://doi.org/10.1016/j.csl.2020.101166
Zhong, M., Liu, P., Chen, Y., Wang, D., Qiu, X., & Huang, X. (2020, July). Extractive Summarization as Text Matching. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online.
Zhou, Q., Yang, N., Wei, F., Huang, S., Zhou, M., & Zhao, T. (2018, July). Neural Document Summarization by Jointly Learning to Score and Select Sentences. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia.