NCU Institutional Repository - Item 987654321/77371


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/77371


    Title: Reducing forecasting error under hidden Markov model by recurrent neural networks
    Authors: 高子庭;Kao, Tzu-Ting
    Contributors: Graduate Institute of Statistics
    Keywords: artificial neural networks;recurrent neural networks;hidden Markov model;Markov switching model;forecasting error;supervised learning algorithm
    Date: 2018-07-19
    Issue Date: 2018-08-31 14:35:43 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, artificial neural networks have become one of the most popular machine learning methods because of their high performance across many application areas. We therefore combine neural networks with a traditional statistical model and propose a method that captures the advantages of both. The statistical model of interest here is the hidden Markov model, and the neural network we choose is the recurrent neural network. Since the output of a recurrent neural network can be shown to approximate a posterior probability in a classification task, we feed this probability into the training procedure of the hidden Markov model to improve the accuracy of the parameter estimates. One advantage of this training algorithm is that it turns the original procedure from unsupervised into supervised, so the class-label information in the data can be brought into the training process. Simulation studies and real-data analysis show that the combined training procedure not only improves the accuracy of the parameter estimates but also reduces their standard errors.
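    The abstract describes feeding RNN class posteriors into the HMM training procedure so that parameter estimation becomes supervised. The record does not spell out the update formulas, so the following is only a minimal sketch of one way such a re-estimation step could look, assuming a Gaussian-emission HMM and per-time-step state posteriors gamma[t, i] produced by a trained RNN classifier; the function name reestimate_hmm_from_posteriors and the outer-product approximation of the pairwise posteriors are illustrative assumptions, not the author's exact scheme.

    import numpy as np

    def reestimate_hmm_from_posteriors(obs, gamma):
        """Re-estimate Gaussian-emission HMM parameters from per-time state
        posteriors gamma[t, i] ~= P(state_t = i | observations), e.g. taken
        from a trained RNN classifier instead of the Baum-Welch E-step.

        obs   : (T,) observed values
        gamma : (T, K) state posteriors, rows summing to 1
        """
        T, K = gamma.shape

        # Initial state distribution: posterior at t = 0.
        pi = gamma[0]

        # Transition matrix: approximate the pairwise posterior
        # xi[t, i, j] ~= gamma[t, i] * gamma[t+1, j]  (independence assumption).
        xi = gamma[:-1, :, None] * gamma[1:, None, :]
        A = xi.sum(axis=0)
        A /= A.sum(axis=1, keepdims=True)

        # State-conditional Gaussian emissions, weighted by the posteriors.
        w = gamma.sum(axis=0)                       # effective count per state
        mu = (gamma * obs[:, None]).sum(axis=0) / w
        var = (gamma * (obs[:, None] - mu) ** 2).sum(axis=0) / w

        return pi, A, mu, np.sqrt(var)

    # Tiny usage example with synthetic posteriors for a 2-state model.
    rng = np.random.default_rng(0)
    obs = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
    gamma = rng.dirichlet([1.0, 1.0], size=obs.size)  # stand-in for RNN outputs
    pi, A, mu, sigma = reestimate_hmm_from_posteriors(obs, gamma)
    print("pi:", pi, "\nA:\n", A, "\nmu:", mu, "sigma:", sigma)

    In the unsupervised Baum-Welch algorithm the same quantities gamma and xi come from the forward-backward recursion; replacing them with posteriors from a classifier trained on labeled state sequences is what makes the procedure supervised in the sense the abstract describes.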
    Appears in Collections:[Graduate Institute of Statistics] Electronic Thesis & Dissertation



    All items in NCUIR are protected by copyright, with all rights reserved.
