DC Field | Value | Language |
dc.contributor | 資訊工程學系 | zh_TW |
dc.creator | 林冠佑 | zh_TW |
dc.creator | Kuan-Yu Lin | en_US |
dc.date.accessioned | 2020-07-20T07:39:07Z | |
dc.date.available | 2020-07-20T07:39:07Z | |
dc.date.issued | 2020 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=107522053 | |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 自從詞向量被廣泛應用在許多自然語言處理任務,且取得不錯的成果後,學者們開始相信詞向量可以有效學習到詞義資訊,並開始研究如何提升詞向量的品質。本論文認為詞向量主要是透過上下文資訊進行學習,沒有利用到人類編撰的字詞關係(如:同義詞、反義詞、知識圖譜等)。我們推測詞向量在辨別同義詞與反義詞的能力上仍有進步空間,加入從字典中萃取出的知識應能改善。然而,過去相關的研究僅使用 pairwise 的方法對同義詞與反義詞進行調整,這種方法無法同時考慮一個詞與其所有同義詞和反義詞之間的關係。因此,本論文提出 listwise 的方法來調整詞向量,提升詞向量的品質。
經過實驗,本論文發現採用全局資訊的模型均優於只採用局部資訊的模型;其中,自注意力機制能學習將不同的注意力分配到同義詞和反義詞中不同的詞上,再結合這些資訊調整詞向量,更能有效利用全局資訊。因此,本論文選擇自注意力機制作為編碼器,在預訓練後使用從字典中萃取出的同義詞與反義詞資訊調整詞向量,提升詞向量的品質。為了進一步提升詞向量的品質,本論文嘗試了正規化、殘差連結、多頭式自注意力機制、更深層的神經網路等方法,並設計實驗說明它們對模型的影響。
最後,本論文設計實驗證明:經本方法調整後,使用少量文本預訓練的詞向量在同義詞任務中的表現可以超越未調整、但使用大量文本預訓練的詞向量。實驗結果亦顯示,同義詞相較於反義詞在相似度任務上是更有用的資訊;同時,同義詞和反義詞資訊並非越多越好,其品質也會影響調整後的結果。 | zh_TW |
dc.description.abstract | Word embeddings have become a standard technique that works well in various natural language processing (NLP) tasks, and there has been much research on improving their quality. We argue that word embeddings are mainly learned from contextual information and ignore the word relations compiled by humans (e.g., synonyms, antonyms, and knowledge graphs). We speculate that incorporating such human-compiled information may improve the quality of the word embeddings. Unlike previous works, which adjust synonym and antonym pairs in a pairwise manner, we propose a listwise method that simultaneously considers the relations between a word and all of its synonyms and antonyms.
Experimental results show that word embeddings trained on a small corpus and adjusted by our approach yield comparable, sometimes even better, results than unadjusted word embeddings trained on a large corpus. Additionally, we show that both the quantity and the quality of synonyms and antonyms affect the performance of our method. Finally, we show that models utilizing global information outperform those utilizing local information in most cases. | en_US |
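The abstract describes adjusting a word vector listwise: self-attention over the word together with all of its synonyms and antonyms, then pulling the vector toward the attended synonyms and away from the attended antonyms. The following is a minimal NumPy sketch of that idea only; the function name, the single-head attention, and the update rule `w + α·(syn − ant)` are illustrative assumptions, not the thesis' actual architecture (which adds normalization, residual connections, multi-head attention, and deeper layers).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def listwise_adjust(word_vec, syn_vecs, ant_vecs, alpha=0.5):
    """Adjust one word vector using ALL of its synonyms and antonyms at once
    (listwise), weighting each list member by self-attention over the whole,
    order-free (undirected) list. Names and update rule are assumptions."""
    # Stack the word with its full synonym/antonym list.
    lst = np.vstack([word_vec[None, :], syn_vecs, ant_vecs])
    d = lst.shape[1]
    # Single-head self-attention: queries = keys = values = the list itself.
    scores = lst @ lst.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    encoded = attn @ lst
    # Pull toward attended synonyms, push away from attended antonyms.
    n_syn = len(syn_vecs)
    syn_part = encoded[1:1 + n_syn].mean(axis=0) if n_syn else 0.0
    ant_part = encoded[1 + n_syn:].mean(axis=0) if len(ant_vecs) else 0.0
    adjusted = word_vec + alpha * (syn_part - ant_part)
    return adjusted / np.linalg.norm(adjusted)
```

Because the attention weights are computed over the entire list at once, every synonym and antonym influences the update jointly, which is the key contrast with pairwise adjustment.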
dc.subject | 詞向量 | zh_TW |
dc.subject | 自注意力機制 | zh_TW |
dc.subject | 同義詞 | zh_TW |
dc.subject | 反義詞 | zh_TW |
dc.subject | Word embeddings | en_US |
dc.subject | Self-attention | en_US |
dc.subject | Synonym | en_US |
dc.subject | Antonym | en_US |
dc.subject | Post-training | en_US |
dc.subject | Listwise | en_US |
dc.title | 基於自注意力機制產生的無方向性序列編碼器使用同義詞與反義詞資訊調整詞向量 | zh_TW |
dc.language.iso | zh-TW | zh-TW |
dc.title | Adjusting Word Embeddings with Synonyms and Antonyms based on Undirected List Encoder Generated from Self-Attention | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |