NCU Institutional Repository (中大機構典藏): Item 987654321/89045


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/89045


    Title: 在資料不平衡下提升分類器性能之策略研究;A study on strategies of improving performance under class imbalanced problem
    Author: 陳詠俊;Chen, Yung-Chun
    Contributors: 工業管理研究所 (Institute of Industrial Management)
    Keywords: classification;class imbalanced problem;cost-sensitive methods;stability
    Date: 2022-07-11
    Date Uploaded: 2022-10-04 10:49:15 (UTC+8)
    Publisher: National Central University
    Abstract: Classification is an important research topic in machine learning. With
    classification models, the labels in large volumes of data can be assigned automatically,
    letting decision makers obtain usable information from sources such as transaction records
    and machine logs while saving a great deal of time. Class imbalance is a particularly
    important issue here: when the class sizes in the data differ greatly, models have
    difficulty classifying correctly. Previous studies have proposed many methods to mitigate
    this problem, but they mainly focus on raising classification metric scores and say little
    about the potential variability these improvements introduce. If the instability caused by
    an improvement method is ignored, the classification results a decision maker relies on may
    vary considerably with the training data, leading to misjudged decisions. This study tries
    to find a strategy that stably improves classifier performance under class imbalance, so
    that decision makers can make robust decisions without worrying about the possible
    uncertainty in training.
    In this study, we use two real-world datasets to present the class imbalance problem,
    designing different imbalance ratios and dataset sizes to examine their effects on the
    classifiers. Three common classifiers are used: Logistic Regression, Support Vector
    Machine, and Random Forest. From the experimental results, we try to identify the main
    causes of whether an improvement in model performance is stable, and we propose an index
    for measuring stability. Finally, we propose a strategy that stably improves model
    performance under class imbalance.
    ;Classification is one of the most common topics in machine learning. Classification
    models can recognize labels automatically, saving a great deal of time and making the
    massive information in digital transactions or machine logs usable. The class imbalance
    problem is one of the most important and widely studied issues in this field: when the
    class ratio is imbalanced, classifiers cannot classify very well. Researchers have proposed
    several methods to solve this problem, but most of them focus only on improving certain
    measurements. If the variation of the results is ignored, decision makers may over- or
    underestimate the classifiers because of differences in the training datasets, leading to
    unsuitable decisions. In this study, we try to find a strategy that stably improves the
    performance of classifiers under class imbalance. With this strategy, decision makers can
    make robust decisions without worrying about large variation in the classification results.
    We conduct a series of experiments with two real-world datasets to present the class
    imbalance problem, covering different imbalance ratios and dataset sizes. Three
    classification models are used in the experiments: Logistic Regression, Support Vector
    Machine, and Random Forest. We examine the effects of cost-sensitive and under-sampling
    methods with these three models. Based on the experimental results, we try to identify the
    main causes of stability and propose a method to describe the stability of the improvement
    methods. In the end, we develop a strategy for raising the performance of classifiers in a
    stable way.
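    A minimal sketch of the ideas named in the abstract follows, for illustration only: the
    thesis does not describe its implementation, so scikit-learn and imbalanced-learn are
    assumed, a synthetic dataset stands in for the two real-world datasets, and the standard
    deviation of F1 across repeated splits stands in for the stability index proposed in the
    thesis, which is not given here.

    # Illustration only; not the thesis's code. Assumes scikit-learn and
    # imbalanced-learn, with synthetic data in place of the study's datasets.
    import numpy as np
    from imblearn.under_sampling import RandomUnderSampler
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import StratifiedShuffleSplit
    from sklearn.svm import SVC

    # Synthetic imbalanced data: about 90% majority class, 10% minority class.
    X, y = make_classification(n_samples=2000, n_features=20,
                               weights=[0.9, 0.1], random_state=0)

    def make_model(name, cost_sensitive):
        # The three classifiers named in the abstract; class_weight="balanced"
        # is one common way to make a classifier cost-sensitive.
        cw = "balanced" if cost_sensitive else None
        return {
            "LogisticRegression": LogisticRegression(max_iter=1000, class_weight=cw),
            "SVM": SVC(class_weight=cw),
            "RandomForest": RandomForestClassifier(class_weight=cw, random_state=0),
        }[name]

    splitter = StratifiedShuffleSplit(n_splits=30, test_size=0.3, random_state=0)
    sampler = RandomUnderSampler(random_state=0)

    for name in ["LogisticRegression", "SVM", "RandomForest"]:
        f1_cost, f1_under = [], []
        for train_idx, test_idx in splitter.split(X, y):
            X_tr, y_tr = X[train_idx], y[train_idx]
            X_te, y_te = X[test_idx], y[test_idx]

            # Cost-sensitive: keep the imbalanced training data but weight
            # minority-class errors more heavily.
            clf = make_model(name, cost_sensitive=True)
            f1_cost.append(f1_score(y_te, clf.fit(X_tr, y_tr).predict(X_te)))

            # Under-sampling: shrink the majority class, then train an
            # unweighted classifier.
            X_res, y_res = sampler.fit_resample(X_tr, y_tr)
            clf = make_model(name, cost_sensitive=False)
            f1_under.append(f1_score(y_te, clf.fit(X_res, y_res).predict(X_te)))

        # The mean reflects overall performance; the standard deviation over
        # the repeated splits is used here as a rough proxy for stability.
        print(f"{name:18s}"
              f" cost-sensitive F1 {np.mean(f1_cost):.3f} (std {np.std(f1_cost):.3f}) |"
              f" under-sampling F1 {np.mean(f1_under):.3f} (std {np.std(f1_under):.3f})")

    In this sketch, a low standard deviation alongside a high mean F1 would correspond to the
    kind of stable improvement the study aims for.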
    Appears in Collections: [Institute of Industrial Management] Theses & Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 Kb    HTML      67


    All items in NCUIR are protected by the original copyright.

