References
[1] A. M. Rush, S. Chopra, and J. Weston, "A Neural Attention Model for Abstractive Sentence Summarization," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal: Association for Computational Linguistics, September 2015, pp. 379-389, doi: 10.18653/v1/D15-1044. [Online]. Available: https://aclanthology.org/D15-1044
[2] S. Chopra, M. Auli, and A. M. Rush, "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks," in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California: Association for Computational Linguistics, June 2016, pp. 93-98, doi: 10.18653/v1/N16-1012. [Online]. Available: https://aclanthology.org/N16-1012
[3] A. See, P. J. Liu, and C. D. Manning, "Get To The Point: Summarization with Pointer-Generator Networks," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada: Association for Computational Linguistics, July 2017, pp. 1073-1083, doi: 10.18653/v1/P17-1099. [Online]. Available: https://aclanthology.org/P17-1099
[4] A. Vaswani et al., "Attention is all you need," in Advances in Neural Information Processing Systems, vol. 30, 2017.
[5] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[6] Y. Liu and M. Lapata, "Text Summarization with Pretrained Encoders," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 3730-3740.
[7] M. Lewis et al., "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," arXiv preprint arXiv:1910.13461, 2019.
[8] J. Zhang, Y. Zhao, M. Saleh, and P. J. Liu, "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization," arXiv preprint arXiv:1912.08777, 2019.
[9] W. Qi et al., "ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training," in Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 2401-2410.
[10] W. Qi et al., "ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, Online: Association for Computational Linguistics, August 2021, pp. 232-239, doi: 10.18653/v1/2021.acl-demo.28. [Online]. Available: https://aclanthology.org/2021.acl-demo.28
[11] Y. Shao et al., "CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation," arXiv preprint arXiv:2109.05729, 2021.
[12] Z.-Y. Dou, P. Liu, H. Hayashi, Z. Jiang, and G. Neubig, "GSum: A General Framework for Guided Neural Abstractive Summarization," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online: Association for Computational Linguistics, June 2021, pp. 4830-4842, doi: 10.18653/v1/2021.naacl-main.384. [Online]. Available: https://aclanthology.org/2021.naacl-main.384
[13] Y. Zhang, X. Zhang, X. Wang, S.-Q. Chen, and F. Wei, "Latent Prompt Tuning for Text Summarization," arXiv preprint arXiv:2211.01837, 2022.
[14] W. Xiao and G. Carenini, "Entity-based spancopy for abstractive summarization to improve the factual consistency," arXiv preprint arXiv:2209.03479, 2022.
[15] A. Ben Abacha and D. Demner-Fushman, "On the Summarization of Consumer Health Questions," in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy: Association for Computational Linguistics, July 2019, pp. 2228-2234, doi: 10.18653/v1/P19-1215. [Online]. Available: https://aclanthology.org/P19-1215
[16] G. Zeng et al., "MedDialog: Large-scale Medical Dialogue Datasets," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online: Association for Computational Linguistics, November 2020, pp. 9241-9250, doi: 10.18653/v1/2020.emnlp-main.743. [Online]. Available: https://aclanthology.org/2020.emnlp-main.743
[17] C. Xu, J. Pei, H. Wu, Y. Liu, and C. Li, "MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online: Association for Computational Linguistics, July 2020, pp. 3586-3596, doi: 10.18653/v1/2020.acl-main.330. [Online]. Available: https://aclanthology.org/2020.acl-main.330
[18] S. Yadav, D. Gupta, and D. Demner-Fushman, "CHQ-Summ: A dataset for consumer healthcare question summarization," arXiv preprint arXiv:2206.06581, 2022.
[19] N. Salek Faramarzi, M. Patel, S. H. Bandarupally, and R. Banerjee, "Context-aware Medication Event Extraction from Unstructured Text," in Proceedings of the 5th Clinical Natural Language Processing Workshop, Toronto, Canada: Association for Computational Linguistics, July 2023, pp. 86-95, doi: 10.18653/v1/2023.clinicalnlp-1.11. [Online]. Available: https://aclanthology.org/2023.clinicalnlp-1.11
[20] N. Chen, X. Su, T. Liu, Q. Hao, and M. Wei, "A benchmark dataset and case study for Chinese medical question intent classification," BMC Medical Informatics and Decision Making, vol. 20, no. 3, pp. 1-7, 2020.
[21] C.-Y. Lin, "ROUGE: A Package for Automatic Evaluation of Summaries," in Text Summarization Branches Out, Barcelona, Spain: Association for Computational Linguistics, July 2004, pp. 74-81. [Online]. Available: https://aclanthology.org/W04-1013
[22] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, "BERTScore: Evaluating text generation with BERT," arXiv preprint arXiv:1904.09675, 2019.
[23] Y. Cui, W. Che, T. Liu, B. Qin, and Z. Yang, "Pre-training with whole word masking for Chinese BERT," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3504-3514, 2021.
[24] Y. Liu et al., "RoBERTa: A robustly optimized BERT pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[25] Y. He, Z. Zhu, Y. Zhang, Q. Chen, and J. Caverlee, "Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online: Association for Computational Linguistics, November 2020, pp. 4604-4614, doi: 10.18653/v1/2020.emnlp-main.372. [Online]. Available: https://aclanthology.org/2020.emnlp-main.372
[26] T. G. Dietterich, "Approximate statistical tests for comparing supervised classification learning algorithms," Neural Computation, vol. 10, no. 7, pp. 1895-1923, 1998.
[27] A. H. Bowker, "A test for symmetry in contingency tables," Journal of the American Statistical Association, vol. 43, no. 244, pp. 572-574, 1948.
[28] Z. Zhao et al., "UER: An Open-Source Toolkit for Pre-training Models," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, Hong Kong, China: Association for Computational Linguistics, November 2019, pp. 241-246, doi: 10.18653/v1/D19-3041. [Online]. Available: https://aclanthology.org/D19-3041
[29] R. Dror, G. Baumer, S. Shlomov, and R. Reichart, "The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia: Association for Computational Linguistics, July 2018, pp. 1383-1392, doi: 10.18653/v1/P18-1128. [Online]. Available: https://aclanthology.org/P18-1128
[30] T. Berg-Kirkpatrick, D. Burkett, and D. Klein, "An Empirical Investigation of Statistical Significance in NLP," in Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea: Association for Computational Linguistics, July 2012, pp. 995-1005. [Online]. Available: https://aclanthology.org/D12-1091
[31] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap. CRC Press, 1994.