NCU Institutional Repository (中大機構典藏): Item 987654321/86646


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86646


    Title: 以同反義詞典調整的詞向量對下游自然語言任務影響之實證研究;Adjusting Word Embeddings Based on the Dictionary of Synonyms and Antonyms and Its Influence on Downstream NLP Tasks -- an Empirical Study
    Authors: 陳?澤;Chen, Kun-Ze
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Adjusting Word Embeddings; Synonyms; Antonyms; Natural Language Processing Tasks
    Date: 2021-08-09
    Issue Date: 2021-12-07 13:04:35 (UTC+8)
    Publisher: National Central University
    Abstract: The concept of "vector" has been widely used in machine learning. For example, in the field of natural language processing, researchers convert words into vectors, also known as word embeddings, so that computers can use fixed-length vectors as features for model training. Researchers also study methodologies to generate word embeddings that better express the semantic relationships between the words specified in a lexicon. These methods can be divided into two categories. The first is to generate word embeddings by simultaneously considering both word co-occurrence in a given corpus and lexicon knowledge, e.g., synonyms or antonyms. The second is to adjust existing (pre-trained) word embeddings with lexicon knowledge.

    We study the second type of method in this thesis. We adjust pre-trained word embeddings through a self-attention mechanism so that the embeddings preserve the synonym and antonym relationships specified in the lexicon. Experimental results show that the adjusted word embeddings do preserve synonym and antonym information better. However, when these adjusted embeddings are used as input to downstream natural language processing tasks, they yield worse results than the embeddings before adjustment.
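
    The abstract does not include implementation details, so the sketch below is only a hypothetical illustration of the general approach it describes: a self-attention layer re-mixes a frozen pre-trained embedding table, trained with a loss that pulls synonym pairs together and pushes antonym pairs apart in cosine space. All names here (EmbeddingAdjuster, lexicon_loss, the margin value, the toy word pairs) are illustrative assumptions, not the author's actual model; PyTorch is assumed as the framework.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EmbeddingAdjuster(nn.Module):
        """Re-mixes a frozen pre-trained embedding table with self-attention."""
        def __init__(self, pretrained: torch.Tensor, num_heads: int = 1):
            super().__init__()
            dim = pretrained.shape[1]
            # The original embeddings are kept fixed; only the attention
            # layer's projections are learned.
            self.register_buffer("original", pretrained)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self) -> torch.Tensor:
            # Treat the whole (toy-sized) vocabulary as one sequence so each
            # word can attend to every other word; the output is the adjusted
            # embedding table. A real vocabulary would need batching.
            x = self.original.unsqueeze(0)      # (1, vocab, dim)
            adjusted, _ = self.attn(x, x, x)
            return adjusted.squeeze(0)          # (vocab, dim)

    def lexicon_loss(emb, synonyms, antonyms, margin=0.4):
        """Attract synonym pairs and repel antonym pairs in cosine similarity."""
        def cos(pairs):
            a = F.normalize(emb[pairs[:, 0]], dim=-1)
            b = F.normalize(emb[pairs[:, 1]], dim=-1)
            return (a * b).sum(-1)
        syn_loss = (1.0 - cos(synonyms)).mean()           # want cos -> 1
        ant_loss = F.relu(cos(antonyms) + margin).mean()  # want cos <= -margin
        return syn_loss + ant_loss

    # Toy usage: 5 words with 8-dim embeddings; pairs are vocabulary indices.
    torch.manual_seed(0)
    model = EmbeddingAdjuster(torch.randn(5, 8))
    synonyms = torch.tensor([[0, 1], [2, 3]])  # hypothetical synonym pairs
    antonyms = torch.tensor([[0, 4]])          # hypothetical antonym pair
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        lexicon_loss(model(), synonyms, antonyms).backward()
        opt.step()
    adjusted = model().detach()  # adjusted table for downstream tasks

    One design note on this sketch: nothing in the loss anchors the adjusted vectors to the originals, so distributional information from pre-training can be lost while the lexicon relations are being enforced, which would be consistent with the degraded downstream results reported in the abstract.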
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File          Size    Format
    index.html    0 Kb    HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

