References
Bahdanau, D., Cho, K., & Bengio, Y. (2016). Neural Machine Translation by Jointly Learning to Align and Translate. ArXiv:1409.0473 [Cs, Stat]. http://arxiv.org/abs/1409.0473
Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching Word Vectors with Subword Information. ArXiv:1607.04606 [Cs]. http://arxiv.org/abs/1607.04606
Chang, C.-T., Huang, C.-C., Yang, C.-Y., & Hsu, J. Y.-J. (2018). A Hybrid Word-Character Approach to Abstractive Summarization. ArXiv:1802.09968 [Cs]. http://arxiv.org/abs/1802.09968
Chen, Q., Zhu, X., Ling, Z., Wei, S., & Jiang, H. (2016). Distraction-Based Neural Networks for Document Summarization. ArXiv:1610.08462 [Cs]. http://arxiv.org/abs/1610.08462
Chen, X., Xu, L., Liu, Z., Sun, M., & Luan, H. (2015). Joint learning of character and word embeddings. Proceedings of the 24th International Conference on Artificial Intelligence, 1236–1242.
Christian, H., Agus, M. P., & Suhartono, D. (2016). Single Document Automatic Text Summarization using Term Frequency-Inverse Document Frequency (TF-IDF). ComTech: Computer, Mathematics and Engineering Applications, 7(4), 285–294. https://doi.org/10.21512/comtech.v7i4.3746
Chuang, W. T., & Yang, J. (2000). Extracting sentence segments for text summarization: A machine learning approach. Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 152–159. https://doi.org/10.1145/345508.345566
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Duan, X., Yu, H., Yin, M., Zhang, M., Luo, W., & Zhang, Y. (2019). Contrastive Attention Mechanism for Abstractive Sentence Summarization. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3044–3053. https://doi.org/10.18653/v1/D19-1301
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211. https://doi.org/10.1016/0364-0213(90)90002-E
Gu, J., Lu, Z., Li, H., & Li, V. O. K. (2016). Incorporating Copying Mechanism in Sequence-to-Sequence Learning. ArXiv:1603.06393 [Cs]. http://arxiv.org/abs/1603.06393
Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9, 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
Hu, B., Chen, Q., & Zhu, F. (2016). LCSTS: A Large Scale Chinese Short Text Summarization Dataset. ArXiv:1506.05865 [Cs]. http://arxiv.org/abs/1506.05865
Kaibi, I., Nfaoui, E. H., & Satori, H. (2019). A Comparative Evaluation of Word Embeddings Techniques for Twitter Sentiment Analysis. 2019 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS), 1–4. https://doi.org/10.1109/WITS.2019.8723864
Kedzie, C., McKeown, K., & Daume III, H. (2019). Content Selection in Deep Learning Models of Summarization. ArXiv:1810.12343 [Cs]. http://arxiv.org/abs/1810.12343
Kilimci, Z. H., & Akyokuş, S. (2019). The Evaluation of Word Embedding Models and Deep Learning Algorithms for Turkish Text Classification. 2019 4th International Conference on Computer Science and Engineering (UBMK), 548–553. https://doi.org/10.1109/UBMK.2019.8907027
Kim, Y. (2014). Convolutional Neural Networks for Sentence Classification. ArXiv:1408.5882 [Cs]. http://arxiv.org/abs/1408.5882
Klein, G., Kim, Y., Deng, Y., Senellart, J., & Rush, A. M. (2017). OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv:1701.02810 [Cs]. http://arxiv.org/abs/1701.02810
Lin, C.-Y. (2004). ROUGE: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out, 74–81. https://www.aclweb.org/anthology/W04-1013
Lin, J., Sun, X., Ma, S., & Su, Q. (2018). Global Encoding for Abstractive Summarization. ArXiv:1805.03989 [Cs]. http://arxiv.org/abs/1805.03989
Liu, Y. (2019). Fine-tune BERT for Extractive Summarization. ArXiv:1903.10318 [Cs]. http://arxiv.org/abs/1903.10318
Liu, Y., & Lapata, M. (2019). Text Summarization with Pretrained Encoders. ArXiv:1908.08345 [Cs]. http://arxiv.org/abs/1908.08345
Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective Approaches to Attention-based Neural Machine Translation. ArXiv:1508.04025 [Cs]. http://arxiv.org/abs/1508.04025
Ma, S., Sun, X., Lin, J., & Wang, H. (2018). Autoencoder as Assistant Supervisor: Improving Text Representation for Chinese Social Media Text Summarization. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 725–731. https://doi.org/10.18653/v1/P18-2115
Mihalcea, R., & Tarau, P. (2004). TextRank: Bringing Order into Text. Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 404–411. https://www.aclweb.org/anthology/W04-3252
Nallapati, R., Zhou, B., dos Santos, C. N., Gulcehre, C., & Xiang, B. (2016). Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond. ArXiv:1602.06023 [Cs]. http://arxiv.org/abs/1602.06023
Nenkova, A., & Vanderwende, L. (2005). The impact of frequency on summarization (Tech. Rep. MSR-TR-2005-101). Microsoft Research.
Rush, A. M., Chopra, S., & Weston, J. (2015). A Neural Attention Model for Abstractive Sentence Summarization. ArXiv:1509.00685 [Cs]. http://arxiv.org/abs/1509.00685
Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to Sequence Learning with Neural Networks. ArXiv:1409.3215 [Cs]. http://arxiv.org/abs/1409.3215
Tas, O., & Kiyani, F. (2017). A Survey Automatic Text Summarization. PressAcademia Procedia, 5(1), 205–213. https://doi.org/10.17261/Pressacademia.2017.591
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. ArXiv:1706.03762 [Cs]. http://arxiv.org/abs/1706.03762
Wang, L., Yao, J., Tao, Y., Zhong, L., Liu, W., & Du, Q. (2018). A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 4453–4460. https://doi.org/10.24963/ijcai.2018/619
Wei, B., Ren, X., Sun, X., Zhang, Y., Cai, X., & Su, Q. (2018). Regularizing Output Distribution of Abstractive Chinese Social Media Text Summarization for Improved Semantic Consistency. ArXiv:1805.04033 [Cs]. http://arxiv.org/abs/1805.04033
Zhou, Q., Yang, N., Wei, F., & Zhou, M. (2017). Selective Encoding for Abstractive Sentence Summarization. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1095–1104. https://doi.org/10.18653/v1/P17-1101
張昇暉 (2017). A study on summary extraction over Chinese document streams. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
楊佩臻 (2013). Automatic document summarization using sentence relationship networks. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
王美淋 (2020). Combining extractive and abstractive models in a two-stage approach to improve summarization performance. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
王蓮淨 (2015). Summary extraction based on topic event tracking. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
蔡汶霖 (2018). Improving a recurrent-neural-network-based Chinese text summarization system with word embedding models. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
陳俞琇 (2019). A summarization model with both extractive and abstractive capabilities. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
麥嘉芳 (2019). A study of attention-based Chinese abstractive summarization with word embeddings. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.
黃嘉偉 (2014). Multi-document summary extraction with a sentence-network clustering framework. Master's thesis, Graduate Institute of Information Management, National Central University, Taoyuan City.