References
[1] Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. Flair: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54–59, 2019.
[2] Alan Akbik, Duncan Blythe, and Roland Vollgraf. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649, 2018.
[3] Silvio Amir, Byron C Wallace, Hao Lyu, Paula Carvalho, and Mário J Silva. Modelling context with user embeddings for sarcasm detection in social media. arXiv preprint arXiv:1607.00976, 2016.
[4] Francesco Barbieri and Horacio Saggion. Modelling irony in Twitter. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 56–64, 2014.
[5] Konstantin Buschmeier, Philipp Cimiano, and Roman Klinger. An impact analysis of features in a classification approach to irony detection in product reviews. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 42–49, 2014.
[6] Paula Carvalho, Luís Sarmento, Mário J Silva, and Eugénio De Oliveira. Clues for detecting irony in user-generated contents: oh...!! it's "so easy" ;-). In Proceedings of the 1st International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion, pages 53–56, 2009.
[7] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[8] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
[9] Dmitry Davidov, Oren Tsur, and Ari Rappoport. Semi-supervised recognition of sarcastic sentences in Twitter and Amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 107–116. Association for Computational Linguistics, 2010.
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[11] Aniruddha Ghosh and Tony Veale. Fracking sarcasm using neural network. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 161–169, 2016.
[12] Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1003–1012, 2015.
[13] Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. Identifying sarcasm in Twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2, pages 581–586. Association for Computational Linguistics, 2011.
[14] Jeremy Howard and Sebastian Ruder. Fine-tuned language models for text classification. ArXiv, abs/1801.06146, 2018.
[15] Suzana Ilić, Edison Marrese-Taylor, Jorge A Balazs, and Yutaka Matsuo. Deep contextualized word representations for detecting sarcasm and irony. arXiv preprint arXiv:1809.09795, 2018.
[16] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.
[17] Renuka Joshi. Accuracy, precision, recall & F1 score: Interpretation of performance measures, Sep 2016.
[18] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.
[19] WenWei Kang. 2019-nlp: XLNet, Jul 2019.
[20] Anupam Khattri, Aditya Joshi, Pushpak Bhattacharyya, and Mark Carman. Your sentiment precedes you: Using an author's historical tweets to predict sarcasm. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 25–30, 2015.
[21] Roger J Kreuz and Sam Glucksberg. How to be sarcastic: The echoic reminder theory of verbal irony. Journal of Experimental Psychology: General, 118(4):374, 1989.
[22] Sachi Kumon-Nakamura, Sam Glucksberg, and Mary Brown. How about another piece of pie: The allusional pretense theory of discourse irony. Journal of Experimental Psychology: General, 124(1):3, 1995.
[23] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
[24] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
[25] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[26] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[27] Rishabh Misra and Prahal Arora. Sarcasm detection using hybrid neural network. arXiv preprint arXiv:1908.07414, 2019.
[28] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
[29] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
[30] Soujanya Poria, Erik Cambria, Devamanyu Hazarika, and Prateek Vij. A deeper look into sarcastic tweets using deep convolutional neural networks. arXiv preprint arXiv:1610.08815, 2016.
[31] Rolandos Potamias, Georgios Siolas, and Andreas Stafylopatis. A transformer-based approach to irony and sarcasm detection, November 2019.
[32] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
[33] Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. Sarcasm detection on Twitter: A behavioral modeling approach. In Proceedings of the 8th ACM International Conference on Web Search and Data Mining (WSDM 2015), pages 97–106, 2015.
[34] Antonio Reyes, Paolo Rosso, and Davide Buscaldi. From humor recognition to irony detection: The figurative language of social media. Data & Knowledge Engineering, 74:1–12, 2012.
[35] Antonio Reyes, Paolo Rosso, and Tony Veale. A multidimensional approach for detecting irony in Twitter. Language Resources and Evaluation, 47, 2013.
[36] Chi Sun, Luyao Huang, and Xipeng Qiu. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. arXiv preprint arXiv:1903.09588, 2019.
[37] Joseph Tepperman, David Traum, and Shrikanth Narayanan. "Yeah right": Sarcasm recognition for spoken dialogue systems. In Ninth International Conference on Spoken Language Processing, 2006.
[38] Akira Utsumi. Verbal irony as implicit display of ironic environment: Distinguishing ironic utterances from nonirony. Journal of Pragmatics, 32(12):1777–1806, 2000.
[39] Cynthia Van Hee, Els Lefever, and Véronique Hoste. SemEval-2018 task 3: Irony detection in English tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 39–50, 2018.
[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
[41] Byron C Wallace, Eugene Charniak, et al. Sparse, contextually informed models for irony detection: Exploiting user communities, entities and sentiment. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1035–1044, 2015.
[42] Byron C Wallace, Laura Kertz, Eugene Charniak, et al. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 512–516, 2014.
[43] Chuhan Wu, Fangzhao Wu, Sixing Wu, Junxin Liu, Zhigang Yuan, and Yongfeng Huang. THU NGN at SemEval-2018 task 3: Tweet irony detection with densely connected LSTM and multi-task learning. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 51–56, 2018.
[44] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5754–5764, 2019.
[45] Shanshan Yu, Jindian Su, and Da Luo. Improving BERT-based text classification with auxiliary sentence and domain knowledge. IEEE Access, 7:176600–176612, 2019.
[46] Meishan Zhang, Yue Zhang, and Guohong Fu. Tweet sarcasm detection using deep neural network. In Proceedings of COLING 2016, The 26th International Conference on Computational Linguistics: Technical Papers, pages 2449–2460, 2016.
[47] Shiwei Zhang, Xiuzhen Zhang, Jeffrey Chan, and Paolo Rosso. Irony detection via sentiment-based transfer learning. Information Processing & Management, 56(5):1633–1644, 2019.