References
[1] Y. Liu and M. Lapata, “Text summarization with pretrained encoders,” arXiv preprint
arXiv:1908.08345, 2019.
[2] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov,
and L. Zettlemoyer, “BART: Denoising sequence-to-sequence pre-training for
natural language generation, translation, and comprehension,” in Proceedings of the
58th Annual Meeting of the Association for Computational Linguistics. Online:
Association for Computational Linguistics, Jul. 2020, pp. 7871–7880. [Online].
Available: https://www.aclweb.org/anthology/2020.acl-main.703
[3] J. Maynez, S. Narayan, B. Bohnet, and R. McDonald, “On faithfulness and
factuality in abstractive summarization,” in Proceedings of the 58th Annual
Meeting of the Association for Computational Linguistics. Online: Association
for Computational Linguistics, Jul. 2020, pp. 1906–1919. [Online]. Available:
https://www.aclweb.org/anthology/2020.acl-main.173
[4] Y. Huang, X. Feng, X. Feng, and B. Qin, “The factual inconsistency problem in
abstractive text summarization: A survey,” arXiv preprint arXiv:2104.14839, 2021.
[5] C.-Y. Lin, “ROUGE: A package for automatic evaluation of summaries,”
in Text Summarization Branches Out. Barcelona, Spain: Association for
Computational Linguistics, Jul. 2004, pp. 74–81. [Online]. Available: https://www.aclweb.org/anthology/W04-1013
[6] A. Wang, K. Cho, and M. Lewis, “Asking and answering questions to evaluate
the factual consistency of summaries,” in Proceedings of the 58th Annual
Meeting of the Association for Computational Linguistics. Online: Association
for Computational Linguistics, Jul. 2020, pp. 5008–5020. [Online]. Available:
https://www.aclweb.org/anthology/2020.acl-main.450
[7] S. Cao and L. Wang, “CLIFF: Contrastive learning for improving faithfulness and
factuality in abstractive summarization,” arXiv preprint arXiv:2109.09209, 2021.
[8] W. Liu, H. Wu, W. Mu, Z. Li, T. Chen, and D. Nie, “CO2Sum: Contrastive learning
for factual-consistent abstractive summarization,” arXiv preprint arXiv:2112.01147,
2021.
[9] P. F. Brown, V. J. Della Pietra, P. V. Desouza, J. C. Lai, and R. L. Mercer, “Class-
based n-gram models of natural language,” Computational linguistics, vol. 18, no. 4,
pp. 467–480, 1992.
[10] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur, “Recurrent neural network based language model,” in Interspeech, vol. 2, no. 3. Makuhari, 2010, pp. 1045–1048.
[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser,
and I. Polosukhin, “Attention is all you need,” in Advances in neural information
processing systems, 2017, pp. 5998–6008.
[12] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep
bidirectional transformers for language understanding,” in Proceedings of the 2019
Conference of the North American Chapter of the Association for Computational
Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp.
4171–4186. [Online]. Available: https://www.aclweb.org/anthology/N19-1423
[13] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., “Language
models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019.
[14] Q. Zhou, N. Yang, F. Wei, S. Huang, M. Zhou, and T. Zhao, “Neural
document summarization by jointly learning to score and select sentences,” in
Proceedings of the 56th Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers). Melbourne, Australia: Association
for Computational Linguistics, Jul. 2018, pp. 654–663. [Online]. Available:
https://www.aclweb.org/anthology/P18-1061
[15] Y. Wu and B. Hu, “Learning to extract coherent summary via deep reinforcement
learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32,
no. 1, 2018.
[16] N. Moratanch and S. Chitrakala, “A survey on abstractive text summarization,” in
2016 International Conference on Circuit, Power and Computing Technologies (IC-
CPCT). IEEE, 2016, pp. 1–7.
[17] Z. Zhao, S. B. Cohen, and B. Webber, “Reducing quantity hallucinations in
abstractive summarization,” in Findings of the Association for Computational
Linguistics: EMNLP 2020. Online: Association for Computational Linguistics,
Nov. 2020, pp. 2237–2249. [Online]. Available: https://www.aclweb.org/anthology/2020.findings-emnlp.203
[18] S. Chen, F. Zhang, K. Sone, and D. Roth, “Improving faithfulness in abstractive
summarization with contrast candidate generation and selection,” arXiv preprint
arXiv:2104.09061, 2021.
[19] M. Cao, Y. Dong, J. Wu, and J. C. K. Cheung, “Factual error correction
for abstractive summarization models,” in Proceedings of the 2020 Conference
on Empirical Methods in Natural Language Processing (EMNLP). Online:
Association for Computational Linguistics, Nov. 2020, pp. 6251–6258. [Online].
Available: https://www.aclweb.org/anthology/2020.emnlp-main.506
[20] Y. Dong, S. Wang, Z. Gan, Y. Cheng, J. C. K. Cheung, and J. Liu, “Multi-
fact correction in abstractive text summarization,” in Proceedings of the 2020
Conference on Empirical Methods in Natural Language Processing (EMNLP).
Online: Association for Computational Linguistics, Nov. 2020, pp. 9320–9331.
[Online]. Available: https://www.aclweb.org/anthology/2020.emnlp-mai
[21] F. Nan, R. Nallapati, Z. Wang, C. N. d. Santos, H. Zhu, D. Zhang, K. McKeown,
and B. Xiang, “Entity-level factual consistency of abstractive text summarization,”
arXiv preprint arXiv:2102.09130, 2021.
[22] Z. Cao, F. Wei, W. Li, and S. Li, “Faithful to the original: Fact aware neural ab-
stractive summarization,” in Thirty-Second AAAI Conference on Artificial Intelligence,
2018.
[23] B. Gunel, C. Zhu, M. Zeng, and X. Huang, “Mind the facts: Knowledge-boosted
coherent abstractive text summarization,” arXiv preprint arXiv:2006.15435, 2020.
[24] S. Welleck, I. Kulikov, S. Roller, E. Dinan, K. Cho, and J. Weston, “Neural text
generation with unlikelihood training,” arXiv preprint arXiv:1908.04319, 2019.
[25] H. Li, J. Zhu, J. Zhang, and C. Zong, “Ensure the correctness of the summary: In-
corporate entailment knowledge into abstractive sentence summarization,” in Pro-
ceedings of the 27th International Conference on Computational Linguistics, 2018,
pp. 1430–1441.
[26] F. Nan, C. N. d. Santos, H. Zhu, P. Ng, K. McKeown, R. Nallapati, D. Zhang,
Z. Wang, A. O. Arnold, and B. Xiang, “Improving factual consistency of abstractive
summarization via question answering,” arXiv preprint arXiv:2105.04623, 2021.
[27] W. Kryscinski, B. McCann, C. Xiong, and R. Socher, “Evaluating the factual
consistency of abstractive text summarization,” in Proceedings of the 2020
Conference on Empirical Methods in Natural Language Processing (EMNLP).
Online: Association for Computational Linguistics, Nov. 2020, pp. 9332–9346.
[Online]. Available: https://www.aclweb.org/anthology/2020.emnlp-main.750
[28] T. Scialom, P.-A. Dray, P. Gallinari, S. Lamprier, B. Piwowarski, J. Staiano, and
A. Wang, “QuestEval: Summarization asks for fact-based evaluation,” arXiv preprint
arXiv:2103.12693, 2021.
[29] J. Zhang, Y. Zhao, M. Saleh, and P. Liu, “PEGASUS: Pre-training with extracted gap-
sentences for abstractive summarization,” in International Conference on Machine
Learning. PMLR, 2020, pp. 11 328–11 339.
[30] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “BERTScore: Evaluating
text generation with BERT,” arXiv preprint arXiv:1904.09675, 2019.
[31] R. Nallapati, B. Zhou, C. dos Santos, Ç. Gülçehre, and B. Xiang, “Abstractive text summarization using sequence-to-sequence RNNs and beyond,” in Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), 2016, p. 280.
[32] S. Narayan, S. B. Cohen, and M. Lapata, “Don't give me the details, just the sum-
mary! topic-aware convolutional neural networks for extreme summarization,” in
Proceedings of the 2018 Conference on Empirical Methods in Natural Language
Processing, 2018, pp. 1797–1807.
[33] T. Goyal and G. Durrett, “Annotating and modeling fine-grained factuality in sum-
marization,” arXiv preprint arXiv:2104.04302, 2021.