NCU Institutional Repository — theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/77561


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77561


    Title: 特徵選取於資料離散化之影響; Feature Selection in Data Discretization
    Authors: 陳鈺錡; CHEN, Yu-Chi
    Contributors: 資訊管理學系 (Department of Information Management)
    Keywords: 資料離散化;特徵選取;資料探勘;分類;連續型屬性;機器學習;Discretization;Feature Selection;Classification;Continuous Attributes;Data Mining;Machine Learning
    Date: 2018-07-05
    Uploaded: 2018-08-31 14:48:27 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: In the real world, data are often not as "clean" as we imagine, so data pre-processing is needed to ensure data quality. Real-world data may be high-dimensional, contain irrelevant or redundant attributes, and include many continuous numeric attributes that are hard to interpret; using such data directly can sharply reduce a model's predictive power. Previous studies have shown that discretization, a pre-processing method that converts numeric attributes into categorical ones, helps improve model accuracy and efficiency, smooths the data, reduces noise, and avoids overfitting. Feature selection is another pre-processing technique widely used in practice: it lowers computational complexity, extracts representative features, and improves prediction accuracy. Little related work discusses combining these two pre-processing methods, so this thesis investigates the best combination of discretization and feature selection in the pre-processing workflow.
    This study uses representative discretization methods, namely Equal-Width Discretization (EWD), Equal-Frequency Discretization (EFD), the Minimum Description Length Principle (MDLP), and ChiMerge (ChiM), together with the feature selection methods Genetic Algorithm (GA), C4.5 decision tree (DT), and Principal Components Analysis (PCA), to examine how well discretization and feature selection fit together and in which order they should be applied. Ten datasets from the UCI repository are used, with dimensionality between 8 and 90 and classification problems with 2 to 28 classes; experimental results are the average prediction accuracy of the C5.0 and SVM classifiers.
    According to the experimental results, MDLP is on average the best-performing discretization method, and performing feature selection before discretization yields higher average prediction accuracy than the reverse order. Under that ordering, whether SVM or C5.0 is used as the classifier, applying C4.5 feature selection first and MDLP discretization second is the combination this thesis most recommends, reaching an accuracy of up to 80.1%.
    ;In reality, data are often not as "clean" as we assume, so we need data pre-processing to assess and ensure data quality. Several problems must be solved: high-dimensional data may include irrelevant and redundant features (attributes of the data), and data may contain many continuous attributes that are hard to understand and explain. Using such "unclean" data can dramatically decrease a model's predictive performance.
    Previous research shows that the advantages of discretization are the reduction and simplification of data: models learn faster and yield more accurate and compact results, and noise present in the data is reduced, which smooths the data and helps avoid overfitting. In addition, feature selection is a common data pre-processing method; it reduces the time complexity of model training and identifies important features, improving the model's classification accuracy. Few studies have discussed pre-processing that combines discretization and feature selection, so this thesis focuses on the optimal combination of the two.
    The experiments use three popular feature selection methods, GA (Genetic Algorithm), DT (C4.5 Decision Tree), and PCA (Principal Components Analysis), and four discretization methods, EWD (Equal-Width Discretization), EFD (Equal-Frequency Discretization), MDLP (Minimum Description Length Principle), and ChiMerge.
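As an illustration of the two unsupervised binning schemes above, here is a minimal pure-Python sketch of equal-width and equal-frequency discretization; the sample data and bin count are hypothetical, and the thesis itself does not prescribe any particular implementation.

```python
def equal_width_bins(values, k):
    """Equal-Width Discretization (EWD): split the value range into k
    intervals of identical width and map each value to its interval index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # Values equal to the maximum would land in bin k, so clamp to k - 1.
    return [min(int((v - lo) / width), k - 1) for v in values]

def equal_freq_bins(values, k):
    """Equal-Frequency Discretization (EFD): assign values to bins so that
    each bin holds (roughly) the same number of observations."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(ranked):
        bins[i] = rank * k // len(values)
    return bins

ages = [23, 25, 31, 35, 44, 52, 58, 63, 71, 80]  # hypothetical continuous attribute
print(equal_width_bins(ages, 3))  # → [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
print(equal_freq_bins(ages, 3))   # → [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
```

On this sample the two methods happen to agree; on skewed data EWD can leave some bins nearly empty, which the supervised methods (MDLP, ChiMerge) avoid by using class information when placing cut points.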
    To explore the optimal combination of discretization and feature selection, data are collected from 10 UCI datasets; their dimensionality ranges from 8 to 90, and the classification problems contain 2 to 28 classes. The comparative results are based on the average accuracy of the C5.0 and SVM classifiers. Our empirical results show that the MDLP discretization method gives the best predictive performance.
    In conclusion, performing feature selection before discretization lets classifiers achieve higher accuracy than discretization alone. Moreover, no matter which classifier is used (C5.0 or SVM), feature selection with C4.5 followed by discretization with MDLP is the combination this thesis recommends; it raises the model's average classification accuracy to 80.1%.
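The MDLP method that the results favour is supervised: it places a cut point where class entropy drops most, and keeps it only if the information gain outweighs the encoding cost under Fayyad and Irani's minimum-description-length stopping criterion. A minimal single-cut sketch in pure Python follows; the data are hypothetical, and the thesis does not publish its implementation, which would apply this test recursively inside each resulting interval.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_mdlp_cut(values, labels):
    """One step of MDLP discretization (Fayyad & Irani, 1993): choose the
    boundary that minimizes the class entropy of the two resulting
    intervals, and accept it only if the gain passes the MDL criterion."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [y for _, y in pairs]
    n = len(ys)
    base = entropy(ys)
    best = None
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue  # candidate cut points lie between distinct values
        e = (i / n) * entropy(ys[:i]) + ((n - i) / n) * entropy(ys[i:])
        if best is None or e < best[1]:
            best = (i, e)
    if best is None:
        return None
    i, e = best
    left, right = ys[:i], ys[i:]
    gain = base - e
    # MDL stopping criterion: the gain must exceed the per-instance cost
    # of encoding the extra partition.
    k, k1, k2 = len(set(ys)), len(set(left)), len(set(right))
    delta = math.log2(3 ** k - 2) - (k * base - k1 * entropy(left) - k2 * entropy(right))
    threshold = (math.log2(n - 1) + delta) / n
    return (xs[i - 1] + xs[i]) / 2 if gain > threshold else None

print(best_mdlp_cut([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1]))  # → 6.5 (classes separate cleanly)
print(best_mdlp_cut([1, 2, 3, 4], [0, 1, 0, 1]))                 # → None (gain fails the MDL test)
```

Refusing low-gain cuts is what keeps MDLP from over-partitioning, which is one plausible reason it pairs well with a prior feature-selection step that has already removed noisy attributes.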
    Appears in Collections: [Graduate Institute of Information Management] Theses & Dissertations

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML | Views: 153 | View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

