NCU Institutional Repository — providing theses and dissertations, past exam questions, journal articles, and research projects: Item 987654321/83945


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/83945


    Title: 基於自注意力機制產生的無方向性序列編碼器使用同義詞與反義詞資訊調整詞向量;Adjusting Word Embeddings with Synonyms and Antonyms based on Undirected List Encoder Generated from Self-Attention
    Authors: 林冠佑;Lin, Kuan-Yu
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Word embeddings; Self-attention; Synonym; Antonym; Post-training; Listwise
    Date: 2020-07-20
    Issue Date: 2020-09-02 17:44:22 (UTC+8)
    Publisher: National Central University
    Abstract: Since word embeddings have been widely applied to many natural language processing tasks with good results, researchers have come to believe that embeddings can effectively capture word meaning, and have begun studying how to improve their quality. This thesis argues that word embeddings are learned mainly from contextual information and do not exploit human-compiled lexical relations such as synonym/antonym lists or knowledge graphs. We speculate that word embeddings still have room for improvement in distinguishing synonyms from antonyms, and that adding knowledge extracted from dictionaries should help. However, previous related work adjusted embeddings with synonyms and antonyms only in a pairwise fashion, which cannot consider the relations between a word and all of its synonyms and antonyms at once. This thesis therefore proposes a listwise method for adjusting word embeddings to improve their quality.
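    The pairwise-versus-listwise distinction above can be sketched as follows. This is a minimal illustration, not the thesis's actual model: the function name `listwise_adjust`, the learning rate, and the mean-based pull/push terms are all assumptions made for the sketch — the point is only that the update sees all synonyms and antonyms of a word at once rather than one pair at a time.

    ```python
    import numpy as np

    def listwise_adjust(word_vec, syn_vecs, ant_vecs, lr=0.1):
        """One listwise update: move the word toward the mean of ALL its
        synonyms and away from the mean of ALL its antonyms in a single
        step, instead of one (word, synonym) or (word, antonym) pair at
        a time as in pairwise methods."""
        pull = np.mean(syn_vecs, axis=0) - word_vec if len(syn_vecs) else 0.0
        push = word_vec - np.mean(ant_vecs, axis=0) if len(ant_vecs) else 0.0
        adjusted = word_vec + lr * (pull + push)
        return adjusted / np.linalg.norm(adjusted)  # keep unit length
    ```

    With such an update the relative placement of every synonym and antonym influences the result jointly, which is the property the pairwise objectives in prior work lack.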

    Experiments show that models using global information outperform models that use only local information. In particular, a self-attention mechanism, which learns to assign different attention weights to different words among the synonyms and antonyms and then combines this information to adjust the word embedding, exploits global information most effectively. This thesis therefore adopts self-attention as the encoder: after pre-training, it adjusts the word embeddings using synonym and antonym information extracted from dictionaries. To further improve embedding quality, we also try normalization, residual connections, multi-head self-attention, and deeper networks, and design experiments to show how each affects the model.
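    A single self-attention pass over the undirected list of a word and its synonyms/antonyms can be sketched roughly as below. The projection matrices `Wq`, `Wk`, `Wv`, the single attention head, and the final normalization step are illustrative assumptions for the sketch, not the thesis's actual architecture.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention_encode(word_vec, related_vecs, Wq, Wk, Wv):
        """Encode the undirected list [word] + synonyms/antonyms with one
        self-attention layer, then adjust the word via a residual
        connection. Wq/Wk/Wv stand in for learned projections."""
        X = np.vstack([word_vec[None, :], related_vecs])  # (L, d) list; order-free
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        d = Q.shape[-1]
        attn = softmax(Q @ K.T / np.sqrt(d))  # each item attends to the whole list
        H = attn @ V                          # weighted mix of global information
        out = word_vec + H[0]                 # residual: adjust, don't replace
        return out / np.linalg.norm(out)      # simple normalization
    ```

    Because the attention weights are computed over the whole list, the encoder can emphasize informative synonyms or antonyms and down-weight noisy ones, which is the global-information advantage the experiments above refer to.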

    Finally, we design experiments showing that, after adjustment by our method, word embeddings pre-trained on a small corpus yield comparable, sometimes even better, results on synonym tasks than unadjusted embeddings pre-trained on a large corpus. The results also indicate that synonyms are more informative than antonyms for similarity tasks, and that more synonym and antonym information is not always better: both the quantity and the quality of the extracted information affect the adjusted embeddings. In most cases, models utilizing global information also outperform those utilizing only local information.
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Description    Size    Format
    index.html                   0 KB    HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
