References
張弛, 毅航, Conrad, & 龍心塵 (2019). BERT 大火却不懂 Transformer?读这一篇就够了 [BERT is all the rage, but still don't understand the Transformer? Reading this one article is enough]. Retrieved July 16, 2019, from http://www.6aiq.com/article/1547650238532?p=1&m=0
張昇暉 (2017). 中文文件串流之摘要擷取研究 [A study on extractive summarization of Chinese document streams]. Unpublished master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan, Taiwan.
蔡汶霖 (2018). 以詞向量模型增進基於遞歸神經網路之中文文字摘要系統效能 [Using word embedding models to improve the performance of a recurrent-neural-network-based Chinese text summarization system]. Unpublished master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan, Taiwan.
謝育倫, 劉士弘, 陳冠宇, 王新民, 許聞廉, & 陳柏琳 (2016). 運用序列到序列生成架構於重寫式自動摘要 [Applying a sequence-to-sequence generation framework to abstractive summarization]. Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016), Tainan, Taiwan.
Ayana, Shen, S., Liu, Z., & Sun, M. (2016). Neural headline generation with minimum risk training. Retrieved June 14, 2019, from https://www.researchgate.net/publication/301878995_Neural_Headline_Generation_with_Minimum_Risk_Training
Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. Proceedings of the International Conference on Learning Representations (ICLR 2015), San Diego, CA.
Bengio, Y., Ducharme, R., Vincent, P., & Jauvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3, 1137-1155.
Cao, Z., Li, W., Li, S., Wei, F., & Li, Y. (2016). AttSum: Joint learning of focusing and summarization with neural attention. Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers (COLING 2016), Osaka, Japan.
Chen, Q., Zhu, X., Ling, Z., Wei, S., & Jiang, H. (2016). Distraction-based neural networks for modeling documents. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 2016), New York, New York, USA.
Chen, Y., Chen, B., & Wang, H. (2009). A probabilistic generative framework for extractive broadcast news speech summarization. IEEE Transactions on Audio, Speech, and Language Processing, 17(1), 95-106.
Chopra, S., Auli, M., & Rush, A. M. (2016). Abstractive sentence summarization with attentive recurrent neural networks. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), San Diego, California.
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12, 2493-2537.
Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised learning of universal sentence representations from natural language inference data. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), Copenhagen, Denmark.
Conroy, J. M., & O'Leary, D. P. (2001). Text summarization via hidden Markov models. Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2001), New Orleans, Louisiana, USA.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391-407.
Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. Retrieved June 14, 2019, from https://arxiv.org/pdf/1810.04805.pdf
Gong, Y., & Liu, X. (2001). Generic text summarization using relevance measure and latent semantic analysis. Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2001), New Orleans, Louisiana, USA.
Gu, J., Lu, Z., Li, H., & Li, V. O. K. (2016). Incorporating copying mechanism in sequence-to-sequence learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany.
Hinton, G. E. (1986). Learning distributed representations of concepts. Proceedings of the Eighth Annual Conference of the Cognitive Science Society, Amherst, Massachusetts.
Hofmann, T. (1999). Probabilistic latent semantic indexing. Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1999), Berkeley, California, USA.
Hou, Y., Xiang, Y., Tang, B., Chen, Q., Wang, X., & Zhu, F. (2017). Identifying high quality document–summary pairs through text matching. Information, 8(2), 64-84.
Hu, B., Chen, Q., & Zhu, F. (2015). LCSTS: A large scale Chinese short text summarization dataset. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), Lisbon, Portugal.
Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2017). Bag of tricks for efficient text classification. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), Valencia, Spain.
Klein, G., Kim, Y., Deng, Y., Senellart, J., & Rush, A. M. (2017). OpenNMT: Open-source toolkit for neural machine translation. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vancouver, Canada.
Kupiec, J., Pedersen, J., & Chen, F. (1995). A trainable document summarizer. Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1995), Seattle, Washington, USA.
Li, P., Bing, L., & Lam, W. (2018). Actor-critic based training framework for abstractive summarization. Retrieved June 14, 2019, from https://arxiv.org/pdf/1803.11070.pdf
Li, P., Lam, W., Bing, L., & Wang, Z. (2017). Deep recurrent generative decoder for abstractive text summarization. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), Copenhagen, Denmark.
Lin, C. (2004). ROUGE: A package for automatic evaluation of summaries. Proceedings of the ACL-04 Workshop on Text Summarization Branches Out, Barcelona, Spain.
Luong, M., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), Lisbon, Portugal.
Ma, S., & Sun, X. (2017). A semantic relevance based neural network for text summarization and text simplification. Retrieved June 14, 2019, from https://arxiv.org/pdf/1710.02318.pdf
Ma, S., Sun, X., Li, W., Li, S., Li, W., & Ren, X. (2018). Word embedding attention network: Generating words by querying distributed word representations for paraphrase generation. Retrieved June 14, 2019, from https://arxiv.org/pdf/1803.01465v1.pdf
Mihalcea, R., & Tarau, P. (2004). TextRank: Bringing order into texts. Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), Barcelona, Spain.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of the International Conference on Learning Representations (ICLR 2013), Scottsdale, Arizona, USA.
Mikolov, T., Karafiat, M., Burget, L., Cernocky, J. H., & Khudanpur, S. (2010). Recurrent neural network based language model. Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010), Makuhari, Chiba, Japan.
Osborne, M. (2002). Using maximum entropy for sentence extraction. Proceedings of the ACL-02 Workshop on Automatic Summarization (including DUC 2002), Philadelphia, Pennsylvania, USA.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), New Orleans, Louisiana.
Rush, A. M., Chopra, S., & Weston, J. (2015). A neural attention model for abstractive sentence summarization. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), Lisbon, Portugal.
Shen, D., Sun, J., Li, H., Yang, Q., & Chen, Z. (2007). Document summarization using conditional random fields. Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), Hyderabad, India.
Sun, X., Wei, B., Ren, X., & Ma, S. (2018). Label embedding network: Learning label representation for soft training of deep networks. Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), Vancouver, Canada.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, California, USA.
Wang, L., Yao, J., Tao, Y., Zhong, L., Liu, W., & Du, Q. (2018). A reinforced topic-aware convolutional sequence-to-sequence model for abstractive text summarization. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, Sweden.
Xu, H., Cao, Y., Shang, Y., Liu, Y., Tan, J., & Guo, L. (2018). Adversarial reinforcement learning for Chinese text summarization. Proceedings of the 18th International Conference on Computational Science (ICCS 2018), Wuxi, China.
Yang, W., Tang, Z., & Tang, X. (2018). A hierarchical neural abstractive summarization with self-attention mechanism. Proceedings of the 3rd International Conference on Automation, Mechanical Control and Computational Engineering (AMCCE 2018), Dalian, China.
Yin, J., Jiang, X., Lu, Z., Shang, L., Li, H., & Li, X. (2016). Neural generative question answering. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 2016), New York, New York, USA.
Yin, W., & Pei, Y. (2015). Optimizing sentence modeling and selection for document summarization. Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina.
Zhuang, H., Wang, C., Li, C., Li, Y., Wang, Q., & Zhou, X. (2018). Chinese language processing based on stroke representation and multidimensional representation. IEEE Access, 6, 41928-41941.