The class imbalance problem is an important and frequently encountered issue in data mining. It occurs when the number of samples in one class is much larger than that in the other classes, producing a skewed data distribution. To maximize overall accuracy, traditional classifiers tend to misclassify samples of the minority class into the majority class, which makes it difficult to establish a good classification rule for the minority class. Class imbalance arises in many real-world applications, such as fault diagnosis, medical diagnosis, and face recognition.

To deal with this problem, a clustering-based data preprocessing approach is proposed, in which two clustering techniques, affinity propagation and K-means, are used individually to divide the majority class into several subclasses, turning the binary training data into multiclass data. This approach can effectively reduce the class imbalance ratio of the training dataset, shorten classifier training time, and improve classification performance.

The experiments use forty-four small class-imbalanced datasets from KEEL and eight high-dimensional datasets from NASA to build five types of classification models: C4.5, MLP, Naïve Bayes, SVM, and k-NN (k=5). A classifier ensemble algorithm is also employed. The AUC (Area Under the ROC Curve) results obtained with different clustering techniques, different classification models, and different numbers of K-means clusters are compared in order to find the best configuration of the proposed approach, which is then compared against traditional methods and ensemble learning approaches from the literature. The experimental results on the KEEL datasets show that k-NN (k=5) is the best choice regardless of whether affinity propagation or K-means (K=5) is used for clustering, while the results on the NASA datasets show that the proposed approach also outperforms the literature methods on high-dimensional datasets.
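The core preprocessing step described above can be illustrated with a minimal sketch using scikit-learn. This is not the thesis code: the synthetic dataset, the K-means variant of the approach with K=5, and the k-NN (k=5) classifier are chosen here to mirror the best KEEL configuration reported in the abstract, and all variable names are illustrative. The majority class is clustered into K subclasses and relabeled, the classifier is trained on the resulting multiclass data, and predictions are mapped back to a minority-class score for AUC evaluation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic imbalanced binary data: class 0 is the majority, class 1 the minority.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Step 1: cluster only the majority-class training samples into K subclasses
# (K=5, matching the best K-means setting reported for the KEEL datasets).
K = 5
maj = y_tr == 0
km = KMeans(n_clusters=K, n_init=10, random_state=42).fit(X_tr[maj])

# Step 2: relabel to form multiclass data. The minority class keeps label 0;
# each majority subclass gets its own label 1..K, lowering the imbalance
# ratio between the minority class and any single (sub)class.
y_multi = np.zeros_like(y_tr)
y_multi[maj] = km.labels_ + 1

# Step 3: train a k-NN (k=5) classifier on the multiclass training data.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_multi)

# Step 4: map predictions back to binary for evaluation. The score for the
# minority class is its predicted probability (label 0 in the multiclass scheme).
proba = clf.predict_proba(X_te)
minority_score = proba[:, list(clf.classes_).index(0)]
auc = roc_auc_score(y_te, minority_score)  # AUC w.r.t. the original binary labels
```

The largest majority subclass is necessarily smaller than the original majority class, so the effective imbalance ratio seen by the classifier drops, which is the mechanism the proposed approach relies on.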