References
Hanan Aldarmaki and Mona Diab. Context-aware cross-lingual
mapping. arXiv preprint arXiv:1903.03243, 2019.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer
normalization. arXiv preprint arXiv:1607.06450, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural
machine translation by jointly learning to align and translate.
arXiv preprint arXiv:1409.0473, 2014.
Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao,
and Heyan Huang. Cross-lingual natural language generation via
pre-training. arXiv preprint arXiv:1909.10481, 2019.
Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic
Denoyer, and Hervé Jégou. Word translation without parallel
data. arXiv preprint arXiv:1710.04087, 2017.
Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams,
Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov.
XNLI: Evaluating cross-lingual sentence representations. arXiv
preprint arXiv:1809.05053, 2018.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina
Toutanova. BERT: Pre-training of deep bidirectional transformers
for language understanding. arXiv preprint arXiv:1810.04805,
2018.
Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained
language model representations for language generation. arXiv
preprint arXiv:1903.09722, 2019.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and
Yann N Dauphin. Convolutional sequence to sequence learning.
arXiv preprint arXiv:1705.03122, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep
residual learning for image recognition. In Proceedings of the
IEEE conference on computer vision and pattern recognition,
pages 770–778, 2016.
Jeremy Howard and Sebastian Ruder. Universal language
model fine-tuning for text classification. arXiv preprint
arXiv:1801.06146, 2018.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic
optimization. arXiv preprint arXiv:1412.6980, 2014.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-
Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade
Shen, Christine Moran, Richard Zens, et al. Moses: Open source
toolkit for statistical machine translation. In Proceedings of the
45th annual meeting of the association for computational linguistics
companion volume proceedings of the demo and poster sessions,
pages 177–180, 2007.
Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan,
and Preethi Jyothi. Cross-lingual training for automatic
question generation. arXiv preprint arXiv:1906.02525,
2019.
Guillaume Lample and Alexis Conneau. Cross-lingual language
model pretraining. arXiv preprint arXiv:1901.07291, 2019.
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries.
In Text summarization branches out, pages 74–81, 2004.
Edward Loper and Steven Bird. NLTK: The natural language toolkit.
arXiv preprint cs/0205028, 2002.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning.
Bilingual word representations with monolingual quality in mind.
In Proceedings of the 1st Workshop on Vector Space Modeling for
Natural Language Processing, pages 151–159, 2015.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff
Dean. Distributed representations of words and phrases and their
compositionality. In Advances in neural information processing
systems, pages 3111–3119, 2013.
Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and
Eneko Agirre. Analyzing the limitations of cross-lingual word
embedding mappings. arXiv preprint arXiv:1906.05407, 2019.
Jessica Ouyang, Boya Song, and Kathleen McKeown. A robust abstractive
system for cross-lingual summarization. In Proceedings
of the 2019 Conference of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies,
Volume 1 (Long and Short Papers), pages 2025–2031,
2019.
Matthew E Peters, Waleed Ammar, Chandra Bhagavatula, and Russell
Power. Semi-supervised sequence tagging with bidirectional
language models. arXiv preprint arXiv:1705.00108, 2017.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner,
Christopher Clark, Kenton Lee, and Luke Zettlemoyer.
Deep contextualized word representations. arXiv preprint
arXiv:1802.05365, 2018.
Telmo Pires, Eva Schlinger, and Dan Garrette. How multilingual is
multilingual BERT? arXiv preprint arXiv:1906.01502, 2019.
Martin Popel and Ondřej Bojar. Training tips for the transformer
model. The Prague Bulletin of Mathematical Linguistics, 110(1):
43–70, 2018.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya
Sutskever. Improving language understanding by generative
pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/
research-covers/language-unsupervised/language_understanding_paper.pdf,
2018.
Sebastian Ruder, Ivan Vulić, and Anders Søgaard. A survey of
cross-lingual word embedding models. Journal of Artificial Intelligence
Research, 65:569–631, 2019.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine
translation of rare words with subword units. arXiv preprint
arXiv:1508.07909, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence
learning with neural networks. In Advances in neural information
processing systems, pages 3104–3112, 2014.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin.
Attention is all you need. In Advances in neural information
processing systems, pages 5998–6008, 2017.
Xiaojun Wan, Huiying Li, and Jianguo Xiao. Cross-language document
summarization based on machine translation quality prediction.
In Proceedings of the 48th Annual Meeting of the Association
for Computational Linguistics, pages 917–926. Association
for Computational Linguistics, 2010.
Shijie Wu and Mark Dredze. Beto, bentz, becas: The surprising
cross-lingual effectiveness of BERT. arXiv preprint
arXiv:1904.09077, 2019.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron
Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio.
Show, attend and tell: Neural image caption generation with
visual attention. In International conference on machine learning,
pages 2048–2057, 2015.
Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang,
Shaonan Wang, and Chengqing Zong. NCLS: Neural cross-lingual
summarization. arXiv preprint arXiv:1909.00156, 2019.