

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77514


    Title: 分群式前處理方法於類別不平衡問題之研究; Clustering-based Data Preprocessing Approach for the Class Imbalance Problem
    Author: 潘怡瑩; Pan, Yi-Ying
    Contributors: Department of Information Management (資訊管理學系)
    Keywords: class imbalance; data mining; clustering; classification
    Date: 2018-06-21
    Date of Upload: 2018-08-31 14:46:40 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: The class imbalance problem is an important and frequently occurring issue in data mining. It arises when the number of samples in one class far exceeds that in another, producing a skewed data distribution. To maximize overall accuracy, traditional classifiers tend to misclassify minority-class samples into the majority class, which makes it difficult to establish good classification rules for the minority class. This phenomenon is increasingly common in real-world applications such as medical diagnosis, fault detection, and face recognition.
    To deal with the class imbalance problem, this thesis proposes a clustering-based data preprocessing approach in which two clustering techniques, affinity propagation and K-means, are used individually to divide the majority class into several subclasses, resulting in multiclass data. This approach can effectively reduce the class imbalance ratio of the training dataset, shorten classifier training time, and improve classification accuracy.
    The experiments use forty-four small class-imbalance datasets from KEEL and eight high-dimensional datasets from NASA to build five types of classification models: C4.5, MLP, Naïve Bayes, SVM, and k-NN (k=5); a classifier ensemble algorithm is also employed. AUC (Area Under Curve) results are compared across the clustering techniques, the classifiers, and different numbers of K-means clusters to find the best configuration of the proposed approach, which is then compared with the traditional and ensemble methods reported in the literature. The KEEL results show that k-NN (k=5) is the best choice regardless of whether affinity propagation or K-means (K=5) is used; the NASA results show that the proposed approach also outperforms the literature methods on high-dimensional datasets.
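    The majority-class splitting step described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual code: it assumes scikit-learn's KMeans (the thesis also evaluates affinity propagation), and all function and variable names are illustrative.

    ```python
    # Sketch of clustering-based preprocessing for class imbalance:
    # cluster the majority class and relabel each sample by its cluster,
    # turning a binary imbalanced problem into a multiclass one.
    import numpy as np
    from sklearn.cluster import KMeans

    def split_majority_class(X, y, majority_label, k=5, seed=0):
        """Replace the majority label with k cluster-specific sublabels."""
        y_new = y.astype(object).copy()
        mask = (y == majority_label)
        clusters = KMeans(n_clusters=k, random_state=seed,
                          n_init=10).fit_predict(X[mask])
        y_new[mask] = [f"{majority_label}_{c}" for c in clusters]
        return y_new

    # Toy example: 100 majority vs 10 minority samples (ratio 10:1).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (10, 2))])
    y = np.array(["maj"] * 100 + ["min"] * 10)
    y_split = split_majority_class(X, y, "maj", k=5)
    # The largest class is now a majority subcluster rather than the whole
    # majority class, so the imbalance ratio against the minority class
    # drops from 10:1 toward roughly 2:1 (clusters need not be equal-sized).
    print(sorted(set(y_split)))
    ```

    A classifier trained on the relabeled data predicts one of the majority subclasses or the minority class; for evaluation, predictions for any majority subclass would be mapped back to the original majority label.
    
    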
    Appears in Collections: [Graduate Institute of Information Management] Theses & Dissertations

    Files in This Item: index.html (HTML, 0Kb)


    All items in NCUIR are protected by the original copyright.

