Real-world binary classification datasets often suffer from the class imbalance problem, in which one class contains a very large number of samples while the other contains very few. Examples include bankruptcy records, rare-disease diagnoses, and accidental casualties. When trained on class-imbalanced data, traditional binary classification algorithms tend to produce biased predictions that favor the majority-class samples, which degrades classification accuracy. In recent years, scholars and researchers have proposed many solutions to the class imbalance problem, yet no related work has identified the most suitable baseline classifier. Unlike related works that propose novel algorithms to enhance existing classification techniques, this study focuses on finding the best baseline classifier for class imbalance problems; its findings provide a guideline that future research can use when comparing novel algorithms against the identified baseline. The experiments are based on 44 binary-classification datasets from the KEEL repository, covering various domains and imbalance ratios. Three popular classifiers, i.e., J48, MLP, and SVM, are constructed and compared, and classifier ensembles built by the bagging and boosting methods are also developed. The results show that bagging-based MLP classifier ensembles perform the best in terms of the AUC rate.
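The best-performing setup reported above, a bagging ensemble of MLP base classifiers evaluated by AUC, can be sketched as follows. This is an illustrative sketch using scikit-learn, not the study's actual pipeline: the KEEL datasets are stood in for by a synthetic imbalanced dataset, and all hyperparameters shown are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic imbalanced binary dataset (~9:1 ratio) as a stand-in
# for one of the 44 KEEL datasets used in the study.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Bagging ensemble whose base learner is an MLP (hyperparameters
# are illustrative, not taken from the study).
clf = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=42),
    n_estimators=10,
    random_state=42,
)
clf.fit(X_tr, y_tr)

# AUC is the evaluation metric reported in the study.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

Bagging trains each MLP on a bootstrap resample of the training set and averages their predicted probabilities, which tends to stabilize the high-variance MLP base learner on small, skewed datasets.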