    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86529


    Title: Aspect-based sentiment analysis with component focusing multi-head coattention networks
    Author: Liao, Yuan-Yu (廖源昱)
    Contributor: Department of Information Management
    Keywords: deep learning; neural networks; sentiment analysis; BERT
    Date: 2021-06-29
    Upload time: 2021-12-07 12:56:41 (UTC+8)
    Publisher: National Central University
    Abstract: Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity of a specific target mentioned in a text. Most earlier work on this task encoded word embeddings with RNNs; more recently, researchers have used attention mechanisms to learn the relationship between the context and the target, but multi-word targets and the use of average pooling remain problematic in many studies. This paper proposes the component focusing multi-head coattention network (CF-MCAN) model, which contains three modules: extended context, component focusing, and multi-head coattention. The extended context allows BERT's capabilities to be better exploited on the ABSA task; component focusing raises the weights of adjectives and adverbs in the context, remedying the earlier practice of average pooling, which treats every word as equally important; and the multi-head coattention network learns the important words in a multi-word target before learning the context representation, allowing sequence data to attend directly to sequence data. Experiments comparing against prior work on three datasets demonstrate the effectiveness of the proposed model.
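    The record contains only the abstract above, so the following PyTorch sketch merely illustrates the bidirectional, sequence-to-sequence attention idea the abstract describes. The class name MultiHeadCoattention, the two-stage attention order, and all dimensions are assumptions for illustration, not the thesis's actual CF-MCAN implementation, and the extended-context and component-focusing modules are omitted.

import torch
import torch.nn as nn

class MultiHeadCoattention(nn.Module):
    """Hypothetical coattention block: the multi-word target first
    attends over the context to weight its own important words, then
    the context attends over that refined target, so a sequence
    attends to a sequence instead of an average-pooled vector."""

    def __init__(self, hidden: int = 768, heads: int = 8):
        super().__init__()
        # Standard multi-head attention; batch_first gives (batch, len, hidden).
        self.target_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.context_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, context: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # 1) Learn which words of the multi-word target matter, using the
        #    context as keys and values.
        refined_target, _ = self.target_attn(target, context, context)
        # 2) Let the full context sequence attend to the refined target
        #    sequence (no average pooling of the target).
        attended_context, _ = self.context_attn(context, refined_target, refined_target)
        return attended_context

# Toy usage with BERT-sized hidden states (dimensions are illustrative).
ctx = torch.randn(2, 40, 768)  # 2 contexts, 40 tokens each
tgt = torch.randn(2, 5, 768)   # 2 multi-word aspect targets, 5 tokens each
print(MultiHeadCoattention()(ctx, tgt).shape)  # torch.Size([2, 40, 768])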
    Appears in collections: [Graduate Institute of Information Management] Master's and doctoral theses

    Files in this item:

    File        Description  Size  Format  Views
    index.html               0Kb   HTML    62


    All items in NCUIR are protected by the original copyright.
