
    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/74795

    Title: A Two-Stage Hybrid Data Preprocessing Approach for the Class Imbalance Problem
    Authors: Yao, Guan-Ting
    Contributors: Department of Information Management
    Keywords: class imbalance; data mining; classification; clustering; instance selection
    Date: 2017-07-14
    Issue Date: 2017-10-27 14:39:34 (UTC+8)
    Publisher: National Central University
    Abstract: The class imbalance problem is an important and frequently encountered issue in data mining. A skewed class distribution occurs when the number of examples representing one class is much smaller than the number representing the other classes. Because traditional classifiers maximize overall accuracy, the resulting models tend to misclassify samples of the minority class as the majority class, so no effective classification rules can be built for the precious minority class. This problem arises in many real-world applications, such as fault diagnosis, medical diagnosis, and face recognition.

    To deal with the class imbalance problem, this thesis proposes a two-stage hybrid data preprocessing framework that combines clustering with instance selection to choose representative samples from the majority class. This approach filters out noisy data in the majority class, reduces sampling error and the imbalance ratio, and shortens classifier training time. More importantly, it lessens the effect of class imbalance and performs very well in the classification task.
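    The abstract does not spell out the framework step by step. As a rough illustration of the general idea only (cluster the majority class, then select representative instances to balance the training set), here is a minimal sketch; the k-means algorithm, the nearest-to-centroid selection rule, the cluster count, and the toy dataset are all assumptions, not the thesis's actual method.

    ```python
    # Minimal sketch of clustering-based undersampling of the majority class.
    # Assumptions (not from the thesis): k-means clustering, the instance
    # nearest each centroid as that cluster's representative, toy data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification

    # Imbalanced toy data: roughly 90% majority (class 0), 10% minority (class 1).
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    X_maj, X_min = X[y == 0], X[y == 1]

    # Stage 1: cluster the majority class, one cluster per desired sample,
    # so the reduced majority class matches the minority class size.
    k = len(X_min)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_maj)

    # Stage 2: instance selection -- keep the majority instance closest to
    # each centroid as that cluster's representative.
    reps = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X_maj[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])

    X_bal = np.vstack([X_maj[reps], X_min])
    y_bal = np.hstack([np.zeros(len(reps)), np.ones(len(X_min))])
    print(X_bal.shape)
    ```

    The balanced set `X_bal` then trains any standard classifier; because each kept instance summarizes one cluster, the reduction discards redundant majority samples rather than random ones.
    
    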

    Our experiments use 44 class-imbalanced datasets from KEEL to build four types of classification models: C4.5, k-NN, Naïve Bayes, and MLP; a classifier ensemble algorithm is also employed. Two clustering techniques and three instance selection algorithms are compared to find the combination best suited to the proposed method. The experimental results show that the proposed framework outperforms many well-known state-of-the-art approaches in terms of AUC. In particular, the proposed framework combined with bagging-based MLP ensemble classifiers performs best, achieving an AUC of 92%.
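    The best-performing configuration reported above, a bagging ensemble of MLPs scored by AUC, can be sketched as follows. The network size, number of bagged estimators, and toy data are assumptions for illustration; the thesis evaluates on the 44 KEEL datasets, which are not reproduced here.

    ```python
    # Hedged sketch of the evaluation setup: bagged MLPs scored by AUC.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Imbalanced toy data standing in for one KEEL dataset.
    X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

    # Bagging ensemble of small MLP base learners (sizes are assumptions).
    clf = BaggingClassifier(
        estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                random_state=1),
        n_estimators=5,
        random_state=1,
    )
    clf.fit(X_tr, y_tr)

    # AUC uses the predicted probability of the minority (positive) class.
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"AUC: {auc:.3f}")
    ```

    AUC is the appropriate metric here because, unlike overall accuracy, it is insensitive to the class ratio and so does not reward a classifier that simply predicts the majority class.
    
    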
    Appears in Collections: [Graduate Institute of Information Management] Master's and Doctoral Theses

    All items in NCUIR are protected by copyright, with all rights reserved.
