NCU Institutional Repository (中大機構典藏): Item 987654321/86646


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86646


    Title: Adjusting Word Embeddings Based on the Dictionary of Synonyms and Antonyms and Its Influence on Downstream NLP Tasks -- an Empirical Study
    Author: Chen, Kun-Ze (陳?澤)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Adjusting Word Embedding; Synonyms; Antonyms; Natural Language Processing Task
    Date: 2021-08-09
    Upload time: 2021-12-07 13:04:35 (UTC+8)
    Publisher: National Central University
    Abstract: The concept of a "vector" is now widely used in machine learning. In natural language processing, researchers convert words into vectors, also known as word embeddings, so that computers can use fixed-length vectors as features for model training. Researchers also study methods for generating word embeddings that better express the semantic relationships between words specified in a lexicon. These methods fall into two categories. The first generates word embeddings by jointly considering word co-occurrence in a corpus and lexicon knowledge such as synonyms or antonyms. The second adjusts existing (pre-trained) word embeddings with lexicon knowledge.

    We study the second type of method in this thesis. We adjust pre-trained word embeddings with a self-attention mechanism so that the embeddings preserve the synonym and antonym relationships in the lexicon. Experimental results show that the adjusted word embeddings indeed better capture synonym and antonym information. However, when these embeddings are used as input to downstream natural language processing tasks, they yield worse results than the unadjusted embeddings.
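The abstract only sketches the approach. As a rough illustration of the general idea (not the thesis's actual implementation), the sketch below adjusts pre-trained embeddings with a self-attention layer over lexicon neighbors and a simple loss that pulls synonym pairs together and pushes antonym pairs apart; all class names, dimensions, and the loss formulation are assumptions made for illustration.

```python
# Minimal sketch (not the thesis code): adjust pre-trained word embeddings with
# self-attention over lexicon neighbors so that synonyms are pulled together and
# antonyms pushed apart. Names, dimensions, and the loss form are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingAdjuster(nn.Module):
    def __init__(self, pretrained: torch.Tensor, num_heads: int = 4):
        super().__init__()
        dim = pretrained.size(1)
        # Start from the pre-trained vectors and fine-tune them with the attention layer.
        self.emb = nn.Embedding.from_pretrained(pretrained.clone(), freeze=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, word_ids: torch.Tensor, neighbor_ids: torch.Tensor) -> torch.Tensor:
        # word_ids: (batch,); neighbor_ids: (batch, k) synonym/antonym ids from the lexicon.
        q = self.emb(word_ids).unsqueeze(1)   # (batch, 1, dim) query: the word itself
        kv = self.emb(neighbor_ids)           # (batch, k, dim) keys/values: lexicon neighbors
        out, _ = self.attn(q, kv, kv)         # attend over the neighbors
        return out.squeeze(1)                 # adjusted embedding, (batch, dim)

def lexicon_loss(model, syn_pairs, ant_pairs, margin: float = 0.4):
    # Pull synonym pairs together and push antonym pairs below a cosine-similarity margin.
    w_s, s = syn_pairs[:, 0], syn_pairs[:, 1]
    w_a, a = ant_pairs[:, 0], ant_pairs[:, 1]
    adj_s = model(w_s, s.unsqueeze(1))
    adj_a = model(w_a, a.unsqueeze(1))
    syn_sim = F.cosine_similarity(adj_s, model.emb(s), dim=-1)
    ant_sim = F.cosine_similarity(adj_a, model.emb(a), dim=-1)
    return (1.0 - syn_sim).mean() + F.relu(ant_sim - margin).mean()

if __name__ == "__main__":
    vocab, dim = 1000, 64
    pretrained = torch.randn(vocab, dim)          # stand-in for real pre-trained vectors
    model = EmbeddingAdjuster(pretrained)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    syn_pairs = torch.randint(0, vocab, (32, 2))  # (word, synonym) id pairs
    ant_pairs = torch.randint(0, vocab, (32, 2))  # (word, antonym) id pairs
    for _ in range(5):
        opt.zero_grad()
        loss = lexicon_loss(model, syn_pairs, ant_pairs)
        loss.backward()
        opt.step()
    print("final loss:", loss.item())
```

In a setup like this, the adjusted vectors could be exported and plugged into a downstream task model in place of the original embeddings, which is the comparison the thesis reports on.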
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item: index.html (HTML, 0 Kb, 37 views, View/Open)


    All items in NCUIR are protected by the original copyright.

