Detailed Record of Thesis 108522038




Author: Tai-Jung Kan (甘岱融)    Department: Computer Science and Information Engineering
Thesis title: Home Appliance Review Research via Adversarial Reptile
(應用對抗式Reptile於家電產品網路評論之研究)
  1. This electronic thesis is authorized by the author for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) For home appliance manufacturers, how their products are discussed and how well they are received on social platforms is information that must be collected. Feedback from online reviews reflects in real time whether a brand's products are accepted by the public, and further analysis can reveal which aspects of a product are discussed the most, so that in developing later products the manufacturer can build on strengths, remedy weaknesses, and improve its own products.

The method of this thesis consists of two parts. In the first part, we divide the manually annotated home appliance data into three subtasks: Named Entity Recognition (NER), Aspect Category Extraction (ACE), and Sentiment Classification (SC). For the NER task we use a BERT-BiLSTM-CRF model, and for ACE and SC we follow Sun et al., adding an auxiliary sentence to the input and using a BERT-based classification model; with these methods we obtain the baseline performance of the three tasks. In the second part, for the SC task, we try to train a task-oriented model for each aspect category, aiming to improve on the SC baseline. Here we combine the Reptile meta-learning algorithm from transfer learning with the idea of adversarial training into an adversarial Reptile algorithm, hoping that the few-shot strengths of meta learning and the adversarial training framework together yield a model that can be trained quickly on data of various distributions.
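Below is a minimal sketch of the BERT-pair input construction mentioned above, in the style of Sun et al.: the review and an auxiliary sentence naming the aspect category are packed into a single sequence for a BERT-based classifier. The tokenizer checkpoint, the auxiliary-sentence template, and the example review and category are illustrative assumptions, not the exact ones used in the thesis.

```python
# Sketch: BERT-pair input for aspect-category sentiment classification,
# following the auxiliary-sentence idea of Sun et al. (2019).
# Checkpoint, template, and example text are assumptions for illustration.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

def build_pair_input(review: str, aspect_category: str):
    """Encode "[CLS] review [SEP] auxiliary sentence [SEP]" for a BERT classifier."""
    auxiliary = f"{aspect_category}的評價如何?"  # hypothetical template: "How is <category> rated?"
    return tokenizer(review, auxiliary, truncation=True, max_length=128,
                     return_tensors="pt")

encoded = build_pair_input("這台洗衣機很安靜,但是脫水很慢", "噪音")
print(tokenizer.decode(encoded["input_ids"][0]))  # shows the two segments joined by [SEP]
```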

We collected home appliance discussion posts from social media sites, annotated them manually, and split the data in half into training and test sets. The results show that in the SC task, without any transfer learning, training a separate model per aspect category yields a Macro-F1 below the baseline (60.1% vs. 68.6%). After training with the adversarial Reptile framework, the Macro-F1 improves to 70.3%, showing that transfer learning helps the SC task.
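The comparison above is reported in Macro-F1. As a quick reference, the sketch below shows how that score is computed with scikit-learn; the three-way positive/negative/neutral label set and the toy predictions are assumptions for illustration only.

```python
# Macro-F1 is the unweighted mean of per-class F1 scores, so rare sentiment
# classes count as much as frequent ones. Toy labels for illustration only.
from sklearn.metrics import f1_score

y_true = ["pos", "neg", "neu", "pos", "neg", "neu"]
y_pred = ["pos", "neg", "pos", "pos", "neu", "neu"]

per_class = f1_score(y_true, y_pred, average=None, labels=["pos", "neg", "neu"])
macro = f1_score(y_true, y_pred, average="macro")
print(per_class, macro)  # macro equals per_class.mean()
```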
Abstract (English) For home appliance manufacturers, how their products are discussed and received on social media is information that must be collected. Feedback from online reviews instantly reflects whether the products are accepted by consumers, and further analysis reveals which aspects of a product are discussed the most, so that manufacturers can make up for shortcomings and improve their own products in the future.

The method in this thesis is divided into two parts. In the first part, we divide the manually labeled home appliance data into three subtasks: Named Entity Recognition (NER), Aspect Category Extraction (ACE), and Sentiment Classification (SC). For the NER task, we use a BERT-BiLSTM-CRF model. For the ACE and SC tasks, we follow Sun et al., adding an auxiliary sentence to the input and using a BERT-based classification model. With the above methods we obtain the baseline performance of these three tasks. In the second part, for the SC task, we try to train task-oriented models for different aspect categories, with the goal of improving the SC baseline. Here we combine the Reptile meta-learning algorithm with the concept of adversarial training to propose an adversarial Reptile algorithm, hoping that the few-shot advantages of meta learning and the structure of adversarial training together produce a model that can be trained quickly on data with various distributions.
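For readers unfamiliar with Reptile, the sketch below shows the standard first-order Reptile outer-loop update (Nichol and Schulman) on one sampled task, here thought of as the reviews of one aspect category. It does not reproduce the adversarial component added in this thesis, and the model, data loader, and hyper-parameters are placeholders.

```python
# Sketch of one standard Reptile meta-update: fine-tune a copy of the model on
# a sampled task, then move the meta-parameters toward the adapted weights.
# The adversarial part of the thesis's adversarial Reptile is not shown.
import copy
import torch

def reptile_step(model, task_loader, loss_fn, inner_lr=1e-3, meta_lr=0.1, inner_steps=5):
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for step, (x, y) in enumerate(task_loader):
        if step >= inner_steps:
            break
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    # Outer update: theta <- theta + meta_lr * (theta_adapted - theta)
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```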

This research collects home appliance reviews from social media, labels them manually, and then splits the data in half into training and test sets. The results show that in the SC task, without any transfer learning, the Macro-F1 of task-oriented models trained per aspect category is lower than the baseline (60.1% vs. 68.6%). After training with the adversarial Reptile architecture, the Macro-F1 of the task-oriented models improves to 70.3%, showing that transfer learning is helpful for the SC task.
Keywords (Chinese) ★ meta learning
★ transfer learning
★ aspect-based sentiment analysis
Keywords (English) ★ meta learning
★ transfer learning
★ ABSA
Table of contents Chinese Abstract
English Abstract
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Related Work
2.1 Online Review Analysis
2.2 Opinion Extraction
2.3 BERT Pair
2.4 Transfer Learning
2.5 Meta Learning
3. Data Preparation and Dataset
3.1 Data Preprocessing
3.2 Annotation Format
3.3 Annotation Process
3.4 Annotation Interface
3.5 Preliminary Annotation Statistics
3.6 Inter-Annotator Agreement
3.6.1 Cohen's Kappa Calculation
3.6.2 Entity Kappa Calculation
3.6.3 Category Label Kappa Calculation
4. Opinion Extraction
4.1 Problem Definition
4.2 Data Processing
4.3 Method
4.4 Evaluation Metrics
4.5 Experiments and Performance
4.5.1 Preliminary Experimental Results
4.5.2 Multi-Model Approach and Experiments
4.6 Summary
5. Transfer Learning
5.1 Dataset
5.2 Adversarial Reptile
5.3 Experiments
5.3.1 Comparison with Baseline Performance
5.3.2 Multi-Model vs. Single-Model Comparison
5.3.3 Ablation Study
5.4 Summary
6. Conclusion
References [1] A. Mounika and S. Sarawathi. Classification of book reviews based on sentiment analysis: A survey. JRAR, 6, 2019.
[2] Aashutosh Bhatt, Ankit Patel, Harsh Chheda, and Kiran Gawande. Amazon review classification and sentiment analysis. IJCSIT, 6:5107-5110, 2015.
[3] Ayat Zaki Ahmed and Manuel Rodriguez-Diaz. Significant labels in sentiment analysis of online customer reviews of airlines. MDPI, 2020.
[4] Alex Brandsen, Suzan Verberne, Milco Wansleeben, and Karsten Lambers. Creating a dataset for named entity recognition in the archaeology domain. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4573-4577, 2020.
[5] R. Caruana. Multitask learning. In Encyclopedia of Machine Learning and Data Mining, 1998.
[6] J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
[7] Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, M. Zhou, and K. Xu. Adaptive recursive neural network for target-dependent Twitter sentiment classification. In ACL, 2014.
[8] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR, 2017.
[9] Yaroslav Ganin, E. Ustinova, Hana Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. ArXiv, abs/1505.07818, 2016.
[10] Hongjie Cai, Yaofeng Tu, Xiangsheng Zhou, Jianfei Yu, and Rui Xia. Aspect-category based sentiment analysis with hierarchical graph convolutional network. In Proceedings of the 28th International Conference on Computational Linguistics, pages 833-843, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics.
[11] Binxuan Huang, Yanglan Ou, and Kathleen M. Carley. Aspect level sentiment classification with attention-over-attention neural networks. ArXiv, abs/1804.06536, 2018.
[12] Jui-Ting Huang, J. Li, Dong Yu, L. Deng, and Y. Gong. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 7304-7308, 2013.
[13] Jing Li, Shuo Shang, and Ling Shao. MetaNER: Named entity recognition with meta-learning. In Proceedings of The Web Conference 2020.
[14] Mary McHugh. Interrater reliability: the kappa statistic. Biochemia Medica, 22(3):276-282, 2012.
[15] Quoc Thai Nguyen, Thoai Linh Nguyen, N. Luong, and Quoc Hung Ngo. Fine-tuning BERT for sentiment analysis of Vietnamese reviews. In 2020 7th NAFOSTED Conference on Information and Computer Science (NICS), pages 302-307, 2020.
[16] Alex Nichol and John Schulman. Reptile: a scalable meta-learning algorithm. ArXiv, abs/1803.02999, 2018.
[17] Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland, August 2014. Association for Computational Linguistics.
[18] Marzieh Saeidi, Guillaume Bouchard, Maria Liakata, and Sebastian Riedel. SentiHood: Targeted aspect based sentiment analysis dataset for urban neighbourhoods. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1546-1556, Osaka, Japan, 2016. The COLING 2016 Organizing Committee.
[19] C. Sun, Xipeng Qiu, Yige Xu, and X. Huang. How to fine-tune BERT for text classification? ArXiv, abs/1905.05583, 2019.
[20] Chi Sun, Luyao Huang, and Xipeng Qiu. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 380-385. Association for Computational Linguistics, 2019.
[21] Duyu Tang, Bing Qin, X. Feng, and T. Liu. Effective LSTMs for target-dependent sentiment classification. In COLING, 2016.
[22] Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. ArXiv, abs/1706.03762, 2017.
[23] Walter Kasper and Mihaela Vela. Sentiment analysis for hotel reviews. In Computational Linguistics-Applications Conference, pages 45-52, 2011.
[24] Yequan Wang, Minlie Huang, X. Zhu, and L. Zhao. Attention-based LSTM for aspect-level sentiment classification. In EMNLP, 2016.
[25] Wei Xue, Wubai Zhou, Tao Li, and Qing Wang. MTNA: A neural multi-task model for aspect category classification and aspect term extraction on restaurant reviews. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 151-156, Taipei, Taiwan, November 2017. Asian Federation of Natural Language Processing.
[26] Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 616-626, Austin, Texas, November 2016. Association for Computational Linguistics.
[27] Xin Li and Wai Lam. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2886-2892. Association for Computational Linguistics, September 2017.
[28] Min Yang, Wenting Tu, Jingxuan Wang, F. Xu, and Xiaojun Chen. Attention based LSTM for target dependent sentiment classification. In AAAI, 2017.
[29] Yue Zhang and Jiangming Liu. Attention modeling for targeted sentiment. In EACL, 2017.
[30] 邱威誠. 應用歌手辨識及情感分析於目標情感偵測與分析之研究 (A study on applying singer identification and sentiment analysis to targeted sentiment detection and analysis). 2020.
Advisor: Chia-Hui Chang (張嘉惠)    Date of approval: 2021-08-18