

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/72082


    Title: A Study of Classification Techniques for Class Imbalanced Datasets (分類技術於類別不平衡資料集之研究)
    Authors: Kung, Chien-Shen (龔健生)
    Contributors: Department of Information Management, In-service Master Program (資訊管理學系在職專班)
    Keywords: Data Mining; Class Imbalanced Problem; Receiver Operating Characteristic (ROC); Area Under the Curve (AUC)
    Date: 2016-06-06
    Issue Date: 2016-10-13 14:25:26 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Most binary classification data generated in real life suffer from the class imbalance problem; examples include bankruptcy records, rare-disease cases, and accidental casualties. When training a classifier, traditional binary classification algorithms often produce prediction bias because of class imbalance, which lowers classification accuracy, and their results tend to favor the majority-class samples. In recent years, scholars and researchers have proposed a considerable number of solutions to the class imbalance problem, yet no related study has identified which baseline classifiers are the most suitable.
    Through the proposed research framework, this study conducts experiments on 44 binary-classification datasets with different imbalance ratios from the KEEL website, in order to identify baseline classifiers that are better suited to class imbalance research and to provide a reference for scholars and researchers.
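    The abstract argues that class imbalance biases traditional classifiers toward the majority class, and the study evaluates results by AUC rather than accuracy. A minimal, stdlib-only Python sketch of why (the labels and scores below are made up for illustration; they are not data from the thesis): a classifier that always predicts the majority class can reach high accuracy yet only 0.5 AUC.

    ```python
    def auc(labels, scores):
        """Rank-based (Mann-Whitney) AUC: the probability that a randomly
        chosen positive is scored above a randomly chosen negative; ties
        count as half a win."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Hypothetical imbalanced sample: 8 majority (0) vs 2 minority (1) cases.
    labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]

    # Degenerate classifier that always outputs the majority class:
    always_majority = [0.0] * len(labels)
    accuracy = sum((s >= 0.5) == bool(y)
                   for y, s in zip(labels, always_majority)) / len(labels)
    print(accuracy)                      # 0.8 -- misleadingly high
    print(auc(labels, always_majority))  # 0.5 -- no ranking power at all

    # Classifier that actually separates the two classes:
    scores = [0.10, 0.20, 0.15, 0.30, 0.25, 0.10, 0.20, 0.35, 0.80, 0.90]
    print(auc(labels, scores))           # 1.0 -- positives ranked above negatives
    ```

    This is why studies on imbalanced data, including this one, report AUC: it measures ranking quality independently of the class distribution.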
    In daily life, most datasets exhibit the class imbalance problem, in which one class contains a very large number of samples while the other contains very few. Examples include bankruptcy records, rare-disease cases, accidental casualties, and so on. When training a classifier, traditional binary classification algorithms generate prediction bias on class imbalanced datasets, and their results tend to favor the majority-class samples. In recent years, a considerable number of scholars have proposed solutions to the class imbalanced problem.
    In this study, unlike related works that propose novel algorithms to enhance the performance of existing classification techniques, we focus on finding the best baseline classifier for the class imbalance problem. This finding provides a guideline for future research to compare novel algorithms against the identified baseline classifier.
    The experiments are based on 44 datasets from various domains with different imbalance ratios; three popular classifiers, i.e., J48, MLP, and SVM, are constructed and compared. Moreover, classifier ensembles built with bagging and boosting are also developed. The results show that bagging-based MLP classifier ensembles perform best in terms of AUC.
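    The best-performing setup above is a bagging-based ensemble. A toy, stdlib-only sketch of the bagging idea (bootstrap resampling plus majority vote); the one-dimensional decision stump below is a hypothetical stand-in for the J48/MLP/SVM base learners used in the thesis, and the training data are invented for illustration:

    ```python
    import random

    def fit_stump(sample):
        """Toy base learner: threshold a 1-D feature halfway between the
        class means (a stand-in for real base classifiers like J48)."""
        xs0 = [x for x, y in sample if y == 0]
        xs1 = [x for x, y in sample if y == 1]
        if not xs0 or not xs1:  # bootstrap resample lost one class entirely
            majority = 0 if len(xs0) >= len(xs1) else 1
            return lambda x: majority
        t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
        return lambda x: 1 if x > t else 0  # assumes class 1 lies above class 0

    def bagging_predict(train, x, base_fit, n_estimators=21, seed=0):
        """Bagging: fit each base model on a bootstrap resample of the
        training set, then majority-vote the individual predictions."""
        rng = random.Random(seed)
        votes = []
        for _ in range(n_estimators):
            resample = [rng.choice(train) for _ in train]
            votes.append(base_fit(resample)(x))
        return max(set(votes), key=votes.count)

    # Invented imbalanced training set: four majority (0), two minority (1).
    train = [(0.10, 0), (0.20, 0), (0.30, 0), (0.40, 0), (0.80, 1), (0.90, 1)]
    print(bagging_predict(train, 0.85, fit_stump))  # minority-side point -> 1
    print(bagging_predict(train, 0.15, fit_stump))  # majority-side point -> 0
    ```

    Averaging many resampled models reduces the variance of an unstable base learner, which is consistent with the thesis's finding that bagged MLP ensembles outperform single classifiers on imbalanced data.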
    Appears in Collections: [Department of Information Management, In-service Master Program] Master/Doctoral Theses

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML | Views: 753


    All items in NCUIR are protected by copyright, with all rights reserved.

