Thesis 107552004 — Detailed Record




Name: 廖莉庭 (Li-Ting Liao)    Department: Department of Computer Science and Information Engineering (In-service Master Program)
Thesis Title: Few-shot Disease-Disease Association Extraction via Model-Agnostic Meta-Learning
  1. This electronic thesis is authorized for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing by users for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) In recent years, meta-learning has been studied extensively in natural language processing. Few-shot learning is especially helpful in specialized domains where annotated data are difficult to obtain, so we conduct meta-testing experiments on annotated biomedical data. In this thesis, we perform meta-testing with few-shot DDAE (Disease-Disease Association Extraction) data on a model that combines meta-learning with the pre-trained model BERT. Because the few-shot DDAE data are class-imbalanced, we adjust the loss function with class weights. We further consider the case in which the dataset contains a class that is not of interest, such as null or others, and that class accounts for a large share of the data: we introduce a hyperparameter that rescales its weight, yielding a new loss function named Null-excluded weighted cross-entropy (NEWCE). NEWCE mitigates the dominance of the uninteresting class and lets the model focus on the important classes. We show that combining the pre-trained model with meta-learning outperforms directly fine-tuning the pre-trained model, and we show how to adjust the weights under few-shot class imbalance.
Abstract (English) In recent years, meta-learning has been studied extensively in the field of natural language processing. Few-shot learning is particularly helpful for specialized domains in which annotated data are difficult to obtain, so we conduct meta-testing experiments on annotated biomedical data. In this thesis, we use few-shot DDAE (Disease-Disease Association Extraction) data for meta-testing on a model that combines meta-learning with the pre-trained model BERT. Because few-shot DDAE suffers from class imbalance, we adjust the loss function with class weights. We also address classes that are not of interest, such as NULL and Others: when such a class accounts for a large share of the data, we use a hyperparameter to rescale its weight, producing a new loss function called Null-excluded weighted cross-entropy (NEWCE), which mitigates the dominance of the uninteresting class and lets the model focus on the important classes. We show that combining a pre-trained model with meta-learning outperforms directly fine-tuning the pre-trained model, and we show how to adjust the weights under class imbalance in the few-shot setting.
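The record does not give NEWCE in closed form; the following is a minimal sketch of the idea the abstract describes — class-weighted cross-entropy in which the weight of the uninteresting null class is further scaled down by a hyperparameter. The function name `newce_loss`, the inverse-frequency weighting scheme, and the hyperparameter name `alpha` are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def newce_loss(logits, targets, class_counts, null_index, alpha=0.1):
    """Null-excluded weighted cross-entropy (illustrative sketch).

    Class weights are inversely proportional to class frequency, and the
    weight of the uninteresting null class is further scaled by `alpha`,
    pushing the model to focus on the classes of interest.
    """
    z = logits - logits.max(axis=1, keepdims=True)           # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.sum() / (len(counts) * counts)          # inverse-frequency weights
    weights[null_index] *= alpha                             # down-weight the null class
    w = weights[targets]                                     # per-example weight
    nll = -np.log(probs[np.arange(len(targets)), targets])   # per-example cross-entropy
    return (w * nll).sum() / w.sum()                         # weight-normalized mean
```

Setting `alpha` to 0 would exclude null-class examples from the loss entirely, while `alpha` = 1 recovers plain inverse-frequency weighted cross-entropy.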
Keywords (Chinese) ★ Meta-learning
★ Few-shot learning
★ Disease association extraction
★ Weighted loss function
★ Class imbalance
Keywords (English) ★ Meta-learning
★ Few-shot learning
★ Disease-Disease Association Extraction
★ Weighted Loss Function
★ Class Imbalance
Table of Contents: Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Related Work
2-1 Few-shot Learning and Meta-learning
2-1-1 Few-shot Learning
2-1-2 Meta-learning
2-2 Disease Relation Extraction
3. Methods
3-1 Problem Definition
3-2 META-NLP for Classification Tasks
3-3 Handling Data Imbalance with the Loss Function
3-4 Generating Few-shot DDAE Tasks
4. Experiments and Evaluation
4-1 Dataset
4-2 Evaluation Method
4-3 Experimental Results
5. Discussion
6. Conclusion
References
Advisor: 蔡宗翰    Approval Date: 2021-10-27
