Graduate Thesis 107522028: Detailed Record




Name Hsiao-Hsuan Chen (陳筱萱)   Department Computer Science and Information Engineering
Thesis Title Evaluation of Stacked Embeddings for Sarcasm Detection Using Auxiliary Sentence
Related Theses
★ Dynamic Overlay Construction for Mobile Target Detection in Wireless Sensor Networks
★ A Simple Detour Strategy for Vehicle Navigation
★ Improving Localization Using Transmitter-Side Voltage
★ Constructing a Virtual Backbone over Vehicular Networks Using Vehicle Classification
★ Why Topology-based Broadcast Algorithms Do Not Work Well in Heterogeneous Wireless Networks?
★ Efficient Wireless Sensor Networks for Mobile Targets
★ A Distributed, Articulation-Point-Based Topology Control Method for Wireless Ad Hoc Networks
★ A Review of Existing Web Frameworks
★ A Distributed Algorithm for Partitioning a Sensor Network into Greedy Blocks
★ Range-Free Distance Measurement in Wireless Networks
★ Inferring Floor Plan from Trajectories
★ An Indoor Collaborative Pedestrian Dead Reckoning System
★ Dynamic Content Adjustment in Mobile Ad Hoc Networks
★ An Image-Based Localization System
★ Distributed Data Compression and Collection Algorithms for Large-Scale Wireless Sensor Networks
★ Collision Analysis in Vehicular WiFi Networks
  1. The author has agreed to make this electronic thesis available for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract With the development of social media, the Internet is flooded with messages. To quickly analyze the positive and negative sentiment in this flood of messages, sentiment analysis has become an important topic in natural language processing. Among the different types of sentiment analysis, sarcasm detection plays a particularly important role, because when a sentence is sarcastic its surface meaning is the opposite of its intended meaning. To avoid such misjudgments, this research proposes two techniques to improve the accuracy of sarcasm detection: auxiliary sentence construction and stacking multiple embeddings. For auxiliary sentence construction, two methods are proposed: AUX-Q and AUX-POSNEG. For stacking multiple embeddings, a transformer-based embedding and a static embedding are combined into a new embedding. We evaluated the proposed techniques on two sarcasm detection datasets. The results show that our methods reduce the error rate of the state-of-the-art approach by 7.23% on the SemEval 2018 Task 3 dataset and by 33.68% on the News Headlines dataset.
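The thesis details these two techniques in Sections 4.2.1 and 4.2.2 (see the table of contents below); this record does not reproduce them. As a rough, hedged illustration only, the sketch below (a) pairs an input with a constructed auxiliary sentence for sequence-pair classification, in the spirit of the auxiliary-sentence idea of Sun et al. [36] cited in the references, and (b) stacks a transformer-based embedding with a static embedding using the FLAIR library [1, 2]. The build_aux_q helper, its question template, and the roberta-base/glove model choices are assumptions for illustration, not the thesis's actual AUX-Q or AUX-POSNEG definitions or experimental configuration.

```python
# Hedged sketch of the abstract's two techniques; every template and model
# name here is an illustrative assumption, not the thesis's configuration.
from flair.data import Sentence
from flair.embeddings import (StackedEmbeddings, TransformerWordEmbeddings,
                              WordEmbeddings)


def build_aux_q(text):
    """Pair the input with a question-style auxiliary sentence, following
    the sequence-pair construction of Sun et al. [36]. The question below
    is a hypothetical stand-in for the thesis's AUX-Q template."""
    return (text, "Is this sentence sarcastic?")  # assumed template


# Stacking multiple embeddings: FLAIR's StackedEmbeddings concatenates the
# per-token vectors produced by each sub-embedding into one longer vector.
stacked = StackedEmbeddings([
    TransformerWordEmbeddings("roberta-base"),  # transformer-based embedding
    WordEmbeddings("glove"),                    # static embedding
])

sentence = Sentence("yeah right, that went really well")
stacked.embed(sentence)
# Each token now carries the concatenated (RoBERTa + GloVe) vector.
print(sentence[0].embedding.shape)
```

A classification head over these stacked vectors, or over a BERT-style encoding of the (text, auxiliary sentence) pair, would then predict the sarcastic/non-sarcastic label; the thesis's actual models and results are given in Chapter 5.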
Keywords ★ Sarcasm Detection
★ Embedding Stacking
★ Auxiliary Sentence
Table of Contents
1 Introduction
2 Related Work
2.1 Approaches based on unexpectedness and contradictory factors
2.2 Content-based approaches
2.3 Deep Learning approaches
2.4 Two-Stage approaches
3 Preliminary
3.1 Static Embedding
3.1.1 Word2Vec
3.1.2 Global Vectors for Word Representation (GloVe)
3.1.3 FastText
3.2 Dynamic Embedding
3.2.1 Embeddings from Language Models (ELMo)
3.2.2 Bidirectional Encoder Representations from Transformers (BERT)
3.2.3 A Lite BERT (ALBERT)
3.2.4 XLNet
3.2.5 Robustly Optimized BERT Approach (RoBERTa)
3.2.6 FLAIR Embedding
4 Design
4.1 Dataset and Data Pre-processing
4.2 Proposed Method
4.2.1 Constructing Auxiliary Sentence
4.2.2 Stacking Multiple Embeddings
5 Performance
5.1 Segmentation of Datasets and Experimental Setup
5.2 Evaluation Metric
5.3 Experimental Results
5.3.1 SemEval 2018 Task 3
5.3.2 News Headlines Dataset For Sarcasm Detection
5.3.3 Comparison between different models
6 Conclusions
References
References
[1] Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. Flair: An easy-to-use framework for state-of-the-art nlp. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54–59, 2019.
[2] Alan Akbik, Duncan Blythe, and Roland Vollgraf. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649, 2018.
[3] Silvio Amir, Byron C Wallace, Hao Lyu, Paula Carvalho, and Mário J Silva. Modelling context with user embeddings for sarcasm detection in social media. arXiv preprint arXiv:1607.00976, 2016.
[4] Francesco Barbieri and Horacio Saggion. Modelling irony in twitter. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 56–64, 2014.
[5] Konstantin Buschmeier, Philipp Cimiano, and Roman Klinger. An impact analysis of features in a classification approach to irony detection in product reviews. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 42–49, 2014.
[6] Paula Carvalho, Luís Sarmento, Mário J Silva, and Eugénio De Oliveira. Clues for detecting irony in user-generated contents: oh...!! it's "so easy" ;-). In Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion, pages 53–56, 2009.
[7] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[8] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
[9] Dmitry Davidov, Oren Tsur, and Ari Rappoport. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the fourteenth conference on computational natural language learning, pages 107–116. Association for Computational Linguistics, 2010.
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[11] Aniruddha Ghosh and Tony Veale. Fracking sarcasm using neural network. In Proceedings of the 7th workshop on computational approaches to subjectivity, sentiment and social media analysis, pages 161–169, 2016.
[12] Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1003–1012, 2015.
[13] Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2, pages 581–586. Association for Computational Linguistics, 2011.
[14] Jeremy Howard and Sebastian Ruder. Fine-tuned language models for text classification. ArXiv, abs/1801.06146, 2018.
[15] Suzana Ilić, Edison Marrese-Taylor, Jorge A Balazs, and Yutaka Matsuo. Deep contextualized word representations for detecting sarcasm and irony. arXiv preprint arXiv:1809.09795, 2018.
[16] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.
[17] Renuka Joshi. Accuracy, precision, recall & f1 score: Interpretation of performance measures, Sep 2016.
[18] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.
[19] WenWei Kang. 2019-nlp: Xlnet, Jul 2019.
[20] Anupam Khattri, Aditya Joshi, Pushpak Bhattacharyya, and Mark Carman. Your sentiment precedes you: Using an author's historical tweets to predict sarcasm. In Proceedings of the 6th workshop on computational approaches to subjectivity, sentiment and social media analysis, pages 25–30, 2015.
[21] Roger J Kreuz and Sam Glucksberg. How to be sarcastic: The echoic reminder theory of verbal irony. Journal of Experimental Psychology: General, 118(4):374, 1989.
[22] Sachi Kumon-Nakamura, Sam Glucksberg, and Mary Brown. How about another piece of pie: The allusional pretense theory of discourse irony. Journal of Experimental Psychology: General, 124(1):3, 1995.
[23] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
[24] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
[25] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[26] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[27] Rishabh Misra and Prahal Arora. Sarcasm detection using hybrid neural network. arXiv preprint arXiv:1908.07414, 2019.
[28] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014.
[29] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
[30] Soujanya Poria, Erik Cambria, Devamanyu Hazarika, and Prateek Vij. A deeper look into sarcastic tweets using deep convolutional neural networks. arXiv preprint arXiv:1610.08815, 2016.
[31] Rolandos Potamias, Georgios Siolas, and Andreas Stafylopatis. A transformer-based approach to irony and sarcasm detection. November 2019.
[32] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training.
[33] Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. Sarcasm detection on twitter: A behavioral modeling approach. In WSDM 2015 - Proceedings of the 8th ACM International Conference on Web Search and Data Mining, pages 97–106, February 2015.
[34] Antonio Reyes, Paolo Rosso, and Davide Buscaldi. From humor recognition to irony detection: The figurative language of social media. Data & Knowledge Engineering, 74:1–12, 2012.
[35] Antonio Reyes, Paolo Rosso, and Tony Veale. A multidimensional approach for detecting irony in twitter. Language Resources and Evaluation, 47, March 2013.
[36] Chi Sun, Luyao Huang, and Xipeng Qiu. Utilizing bert for aspect-based sentiment analysis via constructing auxiliary sentence. arXiv preprint arXiv:1903.09588, 2019.
[37] Joseph Tepperman, David Traum, and Shrikanth Narayanan. "Yeah right": Sarcasm recognition for spoken dialogue systems. In Ninth international conference on spoken language processing, 2006.
[38] Akira Utsumi. Verbal irony as implicit display of ironic environment: Distinguishing ironic utterances from nonirony. Journal of Pragmatics, 32(12):1777–1806, 2000.
[39] Cynthia Van Hee, Els Lefever, and Véronique Hoste. Semeval-2018 task 3: Irony detection in english tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 39–50, 2018.
[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
[41] Byron C Wallace, Eugene Charniak, et al. Sparse, contextually informed models for irony detection: Exploiting user communities, entities and sentiment. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1035–1044, 2015.
[42] Byron C Wallace, Laura Kertz, Eugene Charniak, et al. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 512–516, 2014.
[43] Chuhan Wu, Fangzhao Wu, Sixing Wu, Junxin Liu, Zhigang Yuan, and Yongfeng Huang. Thu ngn at semeval-2018 task 3: Tweet irony detection with densely connected lstm and multi-task learning. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 51–56, 2018.
[44] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754–5764, 2019.
[45] Shanshan Yu, Jindian Su, and Da Luo. Improving bert-based text classification with auxiliary sentence and domain knowledge. IEEE Access, 7:176600–176612, 2019.
[46] Meishan Zhang, Yue Zhang, and Guohong Fu. Tweet sarcasm detection using deep neural network. In Proceedings of COLING 2016, The 26th International Conference on Computational Linguistics: Technical Papers, pages 2449–2460, 2016.
[47] Shiwei Zhang, Xiuzhen Zhang, Jeffrey Chan, and Paolo Rosso. Irony detection via sentiment-based transfer learning. Information Processing & Management, 56(5):1633–1644, 2019.
Advisor Min-Te Sun (孫敏德)   Date of Approval 2020-07-29
