DC Field | Value | Language
dc.contributor | 資訊管理學系 | zh_TW
dc.creator | 姚冠廷 | zh_TW
dc.creator | Guan-Ting Yao | en_US
dc.date.accessioned | 2017-07-14T07:39:07Z | |
dc.date.available | 2017-07-14T07:39:07Z | |
dc.date.issued | 2017 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=104423021 | |
dc.contributor.department | 資訊管理學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 類別非平衡(Class Imbalance)問題是資料探勘領域中重要且頻繁發生的議題,此現象發生於資料集中某一類別樣本數大於另一類別樣本數時,導致資料產生偏態分布,此時,傳統分類器為了追求高分類正確率,建立出的預測模型將會傾向將小類樣本(Minority Class)誤判為大類樣本(Majority Class),導致珍貴的少類樣本無法建立出良好的分類規則,這樣的現象在真實世界中也越來越常見,舉凡醫學診斷、錯誤偵測、臉部辨識等不同領域都經常發生資料的類別非平衡現象。
為了解決類別非平衡問題,本論文提出一個以分群技術為基礎結合樣本選取(Instance Selection)的資料取樣概念,嘗試從大類樣本挑選出具有代表性的資料,形成一個兩階段混合式的資料前處理架構,這樣的架構除了有效減少抽樣誤差、降低資料的類別非平衡比率(Imbalance Ratio)、減少分類器的訓練時間外,還可以提升分類的正確率。
本論文將以KEEL中44個類別非平衡資料集進行實驗,方法架構中嘗試了2種分群方法搭配3種樣本選取演算法以探討最佳配適,再以4種分類器搭配整體學習法建立分類模型,以了解不同分類器在研究架構中的表現,最後,實驗將採用五折交叉驗證之平均AUC結果作為評估指標,再與文獻中傳統方法、整體學習法進行正確率比較,並討論非平衡比率對於實驗架構的影響。實驗發現本研究提出的混合式前處理架構,在多數分類模型下的表現皆優於比較文獻方法,其中MLP分類器搭配Bagging整體學習法為表現最佳的分類模型,其AUC平均正確率高達92%。 | zh_TW |
dc.description.abstract |
The class imbalance problem is an important issue in data mining. A skewed class distribution occurs when the number of examples representing one class is much lower than the number representing the other classes. Traditional classifiers tend to misclassify most samples of the minority class into the majority class because they maximize overall accuracy, which prevents effective classification rules from being built for the valuable minority class. The problem arises in many real-world applications, such as fault detection, medical diagnosis, and face recognition.
To deal with the class imbalance problem, this thesis proposes a two-stage hybrid data preprocessing framework based on clustering and instance selection techniques. The approach filters out noisy data in the majority class and reduces the execution time for classifier training. More importantly, it lessens the effect of class imbalance and performs very well in the classification task.
The experiments use 44 class imbalance datasets from KEEL to build four types of classification models: C4.5, k-NN, Naïve Bayes, and MLP. A classifier ensemble algorithm is also employed, and two clustering techniques are combined with three instance selection algorithms to find the combination best suited to the proposed method. The experimental results show that the proposed framework performs better than many well-known state-of-the-art approaches in terms of AUC. In particular, the proposed framework combined with a bagging-based MLP ensemble classifier performs best, providing an average AUC of 92%. | en_US
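To make the framework summarized in the abstract more concrete, below is a minimal, hedged sketch in Python. It is not the thesis's actual pipeline: the thesis pairs two clustering methods with three instance selection algorithms over the 44 KEEL datasets, whereas this sketch assumes k-means with a simple nearest-to-centroid rule standing in for both stages, a small synthetic dataset in place of KEEL data, and scikit-learn defaults for the bagging-based MLP ensemble evaluated by five-fold cross-validated AUC.

```python
# Minimal sketch of a cluster-then-select undersampling idea for class imbalance.
# NOTE: this is an illustration only, not the thesis's exact pipeline; the number
# of clusters, the classifier settings, and the k-means + nearest-to-centroid
# selection rule are all simplifying assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score


def cluster_based_undersample(X, y, majority_label, n_clusters=None, random_state=0):
    """Keep the minority class intact and replace the majority class with
    one representative instance per k-means cluster."""
    maj_mask = (y == majority_label)
    X_maj, X_min = X[maj_mask], X[~maj_mask]
    y_maj, y_min = y[maj_mask], y[~maj_mask]

    # Assumption: shrink the majority class to roughly the minority-class size.
    if n_clusters is None:
        n_clusters = max(len(X_min), 2)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X_maj)

    # Keep the real majority instance closest to each centroid
    # (a simple stand-in for the instance-selection step).
    keep = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        if idx.size == 0:
            continue
        dist = np.linalg.norm(X_maj[idx] - km.cluster_centers_[c], axis=1)
        keep.append(idx[np.argmin(dist)])
    keep = np.array(keep)

    X_bal = np.vstack([X_maj[keep], X_min])
    y_bal = np.concatenate([y_maj[keep], y_min])
    return X_bal, y_bal


if __name__ == "__main__":
    # Toy imbalanced data (900 majority vs 100 minority) standing in for a KEEL dataset.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(900, 5)),
                   rng.normal(2.0, 1.0, size=(100, 5))])
    y = np.array([0] * 900 + [1] * 100)

    X_bal, y_bal = cluster_based_undersample(X, y, majority_label=0)

    # Bagging ensemble of MLPs, scored by mean AUC over 5-fold cross-validation,
    # mirroring the evaluation protocol described in the abstract.
    model = BaggingClassifier(MLPClassifier(max_iter=500), n_estimators=10, random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    auc = cross_val_score(model, X_bal, y_bal, cv=cv, scoring="roc_auc").mean()
    print(f"mean AUC over 5 folds: {auc:.3f}")
```

Note that a rigorous evaluation would resample only the training folds inside cross-validation; the sketch resamples once up front solely to keep the example short.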
dc.subject | 類別不平衡 | zh_TW
dc.subject | 資料探勘 | zh_TW
dc.subject | 分類 | zh_TW
dc.subject | 分群 | zh_TW
dc.subject | 樣本選取 | zh_TW
dc.subject | Class imbalance | en_US
dc.subject | data mining | en_US
dc.subject | classification | en_US
dc.subject | clustering | en_US
dc.subject | instance selection | en_US
dc.title | 兩階段混合式前處理方法於類別非平衡問題之研究 | zh_TW
dc.language.iso | zh-TW | zh-TW |
dc.title | A Two-Stage Hybrid Data Preprocessing Approach for the Class Imbalance Problem | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US