References
[1] Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, and Chandan K. Reddy. Neural
abstractive text summarization with sequence-to-sequence models. ACM/IMS
Transactions on Data Science, 2(1):1–37, 2021.
[2] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you
need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus,
S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information
Processing Systems, volume 30. Curran Associates, Inc., 2017.
[3] Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun,
Meng Liao, and Shaoyi Chen. Text2Event: Controllable sequence-to-structure
generation for end-to-end event extraction. In Proceedings of the 59th Annual
Meeting of the Association for Computational Linguistics and the 11th
International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 2795–2806, Online, August 2021. Association for
Computational Linguistics.
[4] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo
Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky.
Domain-adversarial training of neural networks. Journal of Machine Learning
Research, 17(59):1–35, 2016.
[5] Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don’t give me the
details, just the summary! Topic-aware convolutional neural networks for
extreme summarization. In Proceedings of the 2018 Conference on Empirical
Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium,
October–November 2018. Association for Computational Linguistics.
[6] Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention
model for abstractive sentence summarization. In Proceedings of the 2015
Conference on Empirical Methods in Natural Language Processing, pages
379–389, Lisbon, Portugal, September 2015. Association for Computational
Linguistics.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT:
Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies,
Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota,
June 2019. Association for Computational Linguistics.
[8] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya
Sutskever. Language models are unsupervised multitask learners. OpenAI Blog,
2019.
[9] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman
Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART:
Denoising sequence-to-sequence pre-training for natural language generation,
translation, and comprehension. In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics, pages 7871–7880, Online, July
2020. Association for Computational Linguistics.
[10] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of
transfer learning with a unified text-to-text transformer. Journal of Machine
Learning Research, 21(140):1–67, 2020.
[11] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a
method for automatic evaluation of machine translation. In Proceedings of
the 40th Annual Meeting of the Association for Computational Linguistics,
pages 311–318, Philadelphia, Pennsylvania, USA, July 2002. Association for
Computational Linguistics.
[12] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In
Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004.
Association for Computational Linguistics.
[13] Yaser Keneshloo, Tian Shi, Naren Ramakrishnan, and Chandan K. Reddy. Deep
reinforcement learning for sequence-to-sequence models. IEEE Transactions on
Neural Networks and Learning Systems, 31(7):2469–2489, 2020.
[14] Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng,
Xuedong Huang, and Meng Jiang. Enhancing factual consistency of abstractive
summarization. In Proceedings of the 2021 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language
Technologies, pages 718–733, Online, June 2021. Association for Computational
Linguistics.
[15] Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang
Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi
Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and
Mark Warschauer. Fantastic questions and where to find them: FairytaleQA –
an authentic dataset for narrative comprehension. In Proceedings of the 60th
Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 447–460, Dublin, Ireland, May 2022. Association for
Computational Linguistics.
[16] Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd.
spaCy: Industrial-strength natural language processing in Python. 2020.
[17] Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi,
Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer.
AllenNLP: A deep semantic natural language processing platform. In
Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6,
Melbourne, Australia, July 2018. Association for Computational Linguistics.
[18] Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Çaglar Gülçehre,
and Bing Xiang. Abstractive text summarization using sequence-to-sequence
RNNs and beyond. In Yoav Goldberg and Stefan Riezler, editors, Proceedings
of the 20th SIGNLL Conference on Computational Natural Language Learning,
CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280–290. ACL, 2016.
[19] Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt,
Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read
and comprehend. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi
Sugiyama, and Roman Garnett, editors, Advances in Neural Information
Processing Systems 28: Annual Conference on Neural Information Processing
Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages
1693–1701, 2015.