References
[1] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,” J. Mach. Learn. Res., vol. 12, pp. 2493–2537, Nov. 2011.
[2] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, “SQuAD: 100,000+ questions for machine comprehension of text,” Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016. [Online]. Available: http://dx.doi.org/10.18653/v1/D16-1264
[3] T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng, “MS MARCO: A human generated machine reading comprehension dataset,” in Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches 2016, co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, ser. CEUR Workshop Proceedings, T. R. Besold, A. Bordes, A. S. d’Avila Garcez, and G. Wayne, Eds., vol. 1773. CEUR-WS.org, 2016. [Online]. Available: http://ceur-ws.org/Vol-1773/CoCoNIPS_2016_paper9.pdf
[4] A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman, “NewsQA: A machine comprehension dataset,” Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017. [Online]. Available: http://dx.doi.org/10.18653/v1/W17-2623
[5] K. Sun, D. Yu, J. Chen, D. Yu, Y. Choi, and C. Cardie, “DREAM: A challenge data set and models for dialogue-based reading comprehension,” Transactions of the Association for Computational Linguistics, vol. 7, pp. 217–231, Mar. 2019. [Online]. Available: http://dx.doi.org/10.1162/tacl_a_00264
[6] G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy, “RACE: Large-scale reading comprehension dataset from examinations,” Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017. [Online]. Available: http://dx.doi.org/10.18653/v1/D17-1082
[7] H. Zhu, F. Wei, B. Qin, and T. Liu, “Hierarchical attention flow for multiple-choice reading comprehension,” in AAAI, 2018.
[8] S. Wang, M. Yu, J. Jiang, and S. Chang, “A co-matching model for multi-choice reading comprehension,” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2018. [Online]. Available: http://dx.doi.org/10.18653/v1/P18-2118
[9] K. Sun, D. Yu, D. Yu, and C. Cardie, “Improving machine reading comprehension with general reading strategies,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019. [Online]. Available: http://dx.doi.org/10.18653/v1/N19-1270
[10] S. Zhang, H. Zhao, Y. Wu, Z. Zhang, X. Zhou, and X. Zhou, “Dual co-matching network for multi-choice reading comprehension,” CoRR, vol. abs/1901.09381, 2019. [Online]. Available: http://dblp.uni-trier.de/db/journals/corr/corr1901.html#abs-1901-09381
[11] S. Zhang, H. Zhao, Y. Wu, Z. Zhang, X. Zhou, and X. Zhou, “DCMN+: Dual co-matching network for multi-choice reading comprehension,” in AAAI. AAAI Press, 2020, pp. 9563–9570. [Online]. Available: http://dblp.uni-trier.de/db/conf/aaai/aaai2020.html#ZhangZW0ZZ20
[12] Q. Ran, P. Li, W. Hu, and J. Zhou, “Option comparison network for multiple-choice reading comprehension,” arXiv preprint arXiv:1903.03033, 2019.
[13] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014; accepted at ICLR 2015 as oral presentation. [Online]. Available: http://arxiv.org/abs/1409.0473
[14] P. Zhu, H. Zhao, and X. Li, “Dual multi-head co-attention for multi-choice reading comprehension,” arXiv preprint arXiv:2001.09415, 2020.
[15] J. Howard and S. Ruder, “Universal language model fine-tuning for text classification,” arXiv preprint arXiv:1801.06146, 2018.
[16] M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations,” Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018. [Online]. Available: http://dx.doi.org/10.18653/v1/N18-1202
[17] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” 2018.
[18] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in NAACL-HLT (1), J. Burstein, C. Doran, and T. Solorio, Eds. Association for Computational Linguistics, 2019, pp. 4171–4186. [Online]. Available: http://dblp.uni-trier.de/db/conf/naacl/naacl2019-1.html#DevlinCLT19
[19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998–6008.
[20] D. Chen, J. Bolton, and C. D. Manning, “A thorough examination of the CNN/Daily Mail reading comprehension task,” Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016. [Online]. Available: http://dx.doi.org/10.18653/v1/P16-1223
[21] B. Dhingra, H. Liu, Z. Yang, W. Cohen, and R. Salakhutdinov, “Gated-attention readers for text comprehension,” Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017. [Online]. Available: http://dx.doi.org/10.18653/v1/P17-1168
[22] H. Zhu, F. Wei, B. Qin, and T. Liu, “Hierarchical attention flow for multiple-choice reading comprehension,” 2018. [Online]. Available: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16331
[23] P.-H. Li, T.-J. Fu, and W.-Y. Ma, “Why attention? Analyze BiLSTM deficiency and its remedies in the case of NER,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 05, pp. 8236–8244, Apr. 2020.
[24] R. K. Srivastava, K. Greff, and J. Schmidhuber, “Training very deep networks,” in Advances in neural information processing systems, 2015, pp. 2377–2385.
[25] “Nationwide junior high and primary school question bank website.” [Online]. Available: https://exam.naer.edu.tw/. Accessed July 21, 2020.