

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/86514


    Title: Few-shot Disease-Disease Association Extraction via Model-Agnostic Meta-Learning
    Author: Liao, Li-Ting (廖莉庭)
    Contributor: Department of Computer Science and Information Engineering (In-service Master Program)
    Keywords: Meta-learning; Few-shot learning; Disease-Disease Association Extraction; Weighted Loss Function; Class Imbalance
    Date: 2021-10-27
    Upload time: 2021-12-07 12:55:19 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, meta-learning has been studied extensively in natural language processing. Few-shot learning is particularly helpful for specialized domains where annotated data are hard to obtain, so we conduct meta-testing experiments on annotated biomedical data. In this thesis, we use few-shot Disease-Disease Association Extraction (DDAE) data for meta-testing on a model that combines meta-learning with the pre-trained BERT model. Because the few-shot DDAE data are class-imbalanced, we adjust the loss function with class weights. We further address classes that are not of interest, such as NULL and Others, which account for a large proportion of the data: we introduce a hyperparameter that rescales their weight, yielding a new loss function named Null-excluded weighted cross-entropy (NEWCE), so that the model focuses on the classes of interest. We show that combining the pre-trained model with meta-learning outperforms directly fine-tuning the pre-trained model, and we demonstrate how to adjust the weights under few-shot class imbalance.
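    The meta-testing procedure is described only at a high level above. As a rough illustration (not the thesis's actual implementation), MAML-style adaptation at meta-test time amounts to a few gradient steps on the target task's small support set, starting from the meta-learned initialization. The PyTorch sketch below assumes a generic classifier model and hypothetical support tensors support_x and support_y:

        import torch

        def meta_test_adapt(model, loss_fn, support_x, support_y,
                            inner_lr=1e-3, inner_steps=5):
            # MAML-style meta-testing: starting from the meta-learned
            # initialization, take a few inner-loop gradient steps on the
            # target task's small support set; the adapted model is then
            # evaluated on the task's query set.
            optimizer = torch.optim.SGD(model.parameters(), lr=inner_lr)
            model.train()
            for _ in range(inner_steps):
                optimizer.zero_grad()
                loss = loss_fn(model(support_x), support_y)
                loss.backward()
                optimizer.step()
            return model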
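    Similarly, NEWCE is named but not fully specified in the abstract. A minimal sketch of one plausible form, assuming inverse-frequency class weights and a hypothetical hyperparameter null_weight that rescales the dominant NULL class:

        import torch
        import torch.nn.functional as F

        def newce_loss(logits, targets, class_counts, null_index, null_weight=0.1):
            # Null-excluded weighted cross-entropy (illustrative sketch):
            # classes get weights inversely proportional to their frequency,
            # and the dominant NULL/Others class is further scaled down by
            # null_weight so the model focuses on the classes of interest.
            counts = class_counts.float()
            weights = counts.sum() / (len(counts) * counts)
            weights[null_index] *= null_weight
            return F.cross_entropy(logits, targets, weight=weights)

        # Example: 4 relation classes where class 0 (NULL) dominates the data.
        logits = torch.randn(8, 4)
        targets = torch.randint(0, 4, (8,))
        loss = newce_loss(logits, targets, torch.tensor([900, 40, 35, 25]), null_index=0)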
    Appears in Collections: [In-service Master Program, Department of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in this item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      132


    All items in NCUIR are protected by original copyright.

