NCU Institutional Repository: Item 987654321/81062


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81062


    Title: Dynamic Ensemble Learning Research (動態多模型融合分析研究)
    Author: Su, Jun-Ru (蘇俊儒)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: ensemble learning; dynamic ensemble learning; supervised learning
    Date: 2019-07-02
    Upload time: 2019-09-03 15:31:56 (UTC+8)
    Publisher: National Central University
    Abstract: Most ensemble learning methods today apply a static strategy to integrate their base learners. After training, the base learners are fused in a “static” manner: the fusion strategy does not adapt to the feature distribution of the sample being tested. In a realistic training scenario, however, a single model may only be good at predicting samples with a particular feature distribution. Since the features of each sample are distributed differently, relying on “static” fusion alone may be overly naïve.
    Mainstream ensemble models mostly assume that a single base model predicts different data with roughly the same ability. This thesis attempts to design a “dynamic” ensemble to compensate for the shortcomings of that assumption. We tried five different methods, based on (1) the class probabilities predicted by the base learners; (2) converting the base learners' predictions into losses; (3) each base learner's rate of correct predictions on nearby samples; (4) each base learner's number of correct predictions on nearby samples; and (5) adding extra features indicating which base learners predict the correct label. These five methods realize the “dynamic” ensemble.
    This thesis explains the five methods we designed and reports experimental results on a simulated dataset and three real datasets: the Allstate car-insurance dataset, Fashion-MNIST, and Kuzushiji-MNIST. All five ensemble methods predict more accurately than any single base learner, which shows that dynamic ensemble fusion is feasible. Compared with an ideal model, however, the results still fall far short, so there may be room to improve the methods by training the learner with extra features.
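
    The abstract only names the five fusion strategies, so below is a minimal, self-contained sketch of the general idea behind methods (3) and (4): weighting each base learner, per test sample, by its accuracy on the k nearest training samples. This is an illustrative assumption of how such a scheme could look, not the thesis's actual implementation; the scikit-learn models, the toy dataset, and k=15 are all choices made here for the example.

    # A sketch of dynamic ensemble fusion via local accuracy weighting.
    # Assumption-laden illustration, not the thesis's actual method.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import NearestNeighbors
    from sklearn.tree import DecisionTreeClassifier

    def dynamic_predict(base_learners, nn, train_correct, X_test, k=15):
        """Weight each base learner by its accuracy on the k training
        samples nearest to each test point, then fuse probabilities."""
        _, idx = nn.kneighbors(X_test, n_neighbors=k)  # (n_test, k) neighbour indices
        # Per-model class probabilities: (n_models, n_test, n_classes).
        probs = np.stack([m.predict_proba(X_test) for m in base_learners])
        # Local accuracy of each model around each test sample: (n_models, n_test).
        local_acc = np.stack([train_correct[m][idx].mean(axis=1)
                              for m in range(len(base_learners))])
        # Normalize into per-sample fusion weights (clip guards against all-zero columns).
        weights = local_acc / local_acc.sum(axis=0, keepdims=True).clip(min=1e-12)
        fused = np.einsum('mt,mtc->tc', weights, probs)  # per-sample weighted average
        return fused.argmax(axis=1)

    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=6),
              GaussianNB()]
    for m in models:
        m.fit(X_tr, y_tr)

    # Cache, per model, a 0/1 vector of which training samples it gets right.
    train_correct = np.stack([(m.predict(X_tr) == y_tr).astype(float) for m in models])

    nn = NearestNeighbors().fit(X_tr)
    pred = dynamic_predict(models, nn, train_correct, X_te)
    print("dynamic ensemble accuracy:", (pred == y_te).mean())

    One caveat of this naive sketch: measuring local accuracy on the same data the base learners were fitted on is optimistic, since models can memorize training samples; out-of-fold predictions would be the more careful choice.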
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in this item:

    File: index.html | Size: 0Kb | Format: HTML | Views: 81


    All items in NCUIR are protected by original copyright.

