In recent years, meta-learning has been extensively studied in natural language processing. Few-shot learning is particularly helpful for specialized domains where annotated data are difficult to obtain, so we conduct meta-testing experiments with biomedical annotated data. In this paper, we use few-shot DDAE (Disease-Disease Association Extraction) data for meta-testing on a model that combines meta-learning with the pre-trained model BERT. Because the few-shot DDAE data are class-imbalanced, we adjust the loss function with class weights. We further consider the case where the dataset contains a category of no interest, such as null or others, that accounts for a large proportion of the data: we introduce a hyperparameter to adjust that category's weight, yielding a new loss function named Null-excluded weighted cross-entropy (NEWCE), which mitigates the effect of the dominant but uninteresting category and lets the model focus on the important ones. We show that combining the pre-trained model with meta-learning outperforms directly fine-tuning the pre-trained model, and we demonstrate how to adjust the weights under few-shot class imbalance.
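
To make the idea concrete, below is a minimal PyTorch sketch of a null-excluded weighted cross-entropy of the kind the abstract describes. The function name `newce_loss`, the inverse-frequency weighting scheme, and the hyperparameter `alpha` are illustrative assumptions, not the paper's exact formulation, which is defined in the paper body.

```python
import torch
import torch.nn as nn

def newce_loss(logits, targets, class_counts, null_index, alpha=0.1):
    """Hypothetical sketch of a null-excluded weighted cross-entropy.

    Class weights are set inversely proportional to class frequency,
    then the weight of the uninteresting 'null' class is further
    scaled down by the hyperparameter alpha, so the model focuses on
    the remaining categories rather than the dominant null class.
    """
    counts = torch.tensor(class_counts, dtype=torch.float)
    weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights
    weights[null_index] *= alpha                      # down-weight the null class
    return nn.functional.cross_entropy(logits, targets, weight=weights)

# Example: 4 classes, where class 0 is 'null' and dominates the data.
logits = torch.randn(8, 4)                  # batch of 8 examples, 4 classes
targets = torch.randint(0, 4, (8,))
loss = newce_loss(logits, targets, class_counts=[900, 40, 35, 25], null_index=0)
```

With `alpha = 1` this reduces to ordinary class-weighted cross-entropy; smaller values of `alpha` progressively exclude the null class from the loss.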