Master's Thesis 105225023: Detailed Record




Author: Tzu-Ting Kao (高子庭)   Department: Graduate Institute of Statistics
Thesis Title: Reducing Forecasting Error under Hidden Markov Models by Recurrent Neural Networks
Related Theses
★ Parameter Estimation under the SABR Model Using Forward and Option Data
★ Predictability of Stock Returns on the Taiwan Index
★ Portfolio Analysis and Extension for Taiwan Stocks on the Alpha-TEV Frontier
★ On Jump Risk of Liquidation in Limit Order Book
★ Innovation, Valuation, and Analysis of Structured Products
★ Efficient Predictability Tests with Heavy-Tailed Residuals
★ A Dynamic Rebalancing Strategy for Portfolio Allocation
★ A Multivariate Markov Switching Model for Portfolio Optimization
★ Asymptotically Optimal Change-Point Detection for FinTech Network Security
★ Empirical Evidences for Correlated Defaults
★ Detecting the Number of Regime Switches in Financial Markets
★ Bootstrap Estimation of Value at Risk under Importance Resampling: The Cases of Taiwan Cement and ASUS Stocks
★ Computation and Empirical Study of Value at Risk under the DVEC-GARCH Model
★ Error Estimation for Asymmetric Volatility Parameters of Assets
★ The Relationship between Firm Performance and Employee Stock Options
★ Improving Parameter Estimation of the IGARCH Model by Incorporating Call Options
Full text: never available to the public (永不開放)
Abstract (Chinese) In recent years, artificial neural networks have become one of the most popular machine learning methods because of their high level of performance in applications across many fields. We therefore combine neural networks with a traditional statistical model and propose a method that captures the advantages of both. The statistical model of interest in this thesis is the hidden Markov model, and the network we use is the recurrent neural network. Because we can show that, in classification problems, the output of a recurrent neural network approximates a posterior probability, we feed this probability into the training algorithm of the hidden Markov model to improve the accuracy of the parameter estimates. One advantage of this training algorithm based on recurrent neural networks is that it turns the original algorithm from unsupervised into supervised, so the new algorithm can incorporate the class information contained in the data. In both simulations and real-data analysis, the new algorithm not only improves the accuracy of the parameter estimates but also reduces their standard errors.
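The premise the abstract relies on, that a classifier trained by minimizing cross-entropy approximates the Bayesian posterior (the Richard and Lippmann result cited in the thesis), can be checked numerically. Below is a toy sketch in which a plain logistic classifier stands in for the recurrent network; the two-Gaussian setup and all constants are hypothetical, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generative model: class 0 ~ N(0, 1), class 1 ~ N(2, 1),
# equal priors.  The Bayes posterior P(y=1 | x) is then known in closed form.
n = 4000
y = rng.integers(0, 2, n)
x = rng.normal(2.0 * y, 1.0)

# Fit a logistic classifier by gradient descent on the cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - y) * x)   # gradient of cross-entropy w.r.t. w
    b -= 0.5 * np.mean(p - y)         # gradient w.r.t. b

# Closed-form Bayes posterior: log-odds = 2x - 2 for this model.
xs = np.linspace(-1.0, 3.0, 9)
p_model = 1.0 / (1.0 + np.exp(-(w * xs + b)))
p_bayes = 1.0 / (1.0 + np.exp(-(2.0 * xs - 2.0)))
max_gap = np.max(np.abs(p_model - p_bayes))
```

The fitted weights approach the true log-odds coefficients (w near 2, b near -2), so the classifier's output tracks the Bayes posterior across the input range; this is the property that justifies plugging network outputs into the HMM training step.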
Abstract (English) In recent years, artificial neural networks have become a very popular machine learning method because of their high level of performance. We therefore want to combine neural networks with a traditional statistical model and propose a method that captures the advantages of both. The statistical model we are interested in is the hidden Markov model, and the neural network we choose is the recurrent neural network. Since the output of a recurrent neural network can be shown to approximate a posterior probability in classification tasks, we feed this probability into the training process of the hidden Markov model to improve the accuracy of the parameter estimators. An advantage of this algorithm is that it changes the original training algorithm from unsupervised to supervised, so information about the data labels can be brought into the training process. Simulations and real-data analysis show that this combined training process not only improves the accuracy of parameter estimation but also reduces its standard error.
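The core step the abstract describes, replacing the forward-backward posteriors in HMM re-estimation with classifier-derived state posteriors, can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the two-state Gaussian HMM parameters are hypothetical, and a one-hot label matrix stands in for the recurrent network's softmax output, as the idealized limit of a well-trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state Gaussian HMM used to simulate data.
A_true = np.array([[0.9, 0.1], [0.2, 0.8]])
means_true = np.array([0.0, 3.0])
sds_true = np.array([1.0, 1.0])

T = 2000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A_true[states[t - 1]])
obs = rng.normal(means_true[states], sds_true[states])

# Supervised surrogate for the RNN: per-time-step state posteriors
# gamma[t, i] = P(state_t = i | data).  Here: one-hot labels.
gamma = np.eye(2)[states]

# Baum-Welch-style M-step driven by these posteriors instead of the
# forward-backward quantities.
xi = gamma[:-1, :, None] * gamma[1:, None, :]   # soft transition counts
A_hat = xi.sum(axis=0)
A_hat /= A_hat.sum(axis=1, keepdims=True)       # row-normalize transitions
w = gamma / gamma.sum(axis=0)                    # per-state observation weights
means_hat = (w * obs[:, None]).sum(axis=0)
sds_hat = np.sqrt((w * (obs[:, None] - means_hat) ** 2).sum(axis=0))
```

With informative posteriors the re-estimated transition matrix and emission parameters land close to the generating values, which is the mechanism by which label information sharpens the HMM parameter estimates.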
Keywords (Chinese) ★ artificial neural networks (人工類神經網絡)
★ recurrent neural networks (遞迴類神經網絡)
★ hidden Markov model (隱馬可夫模型)
★ Markov switching model (馬可夫轉換模型)
★ forecasting error (預測誤差)
★ supervised learning algorithm (監督式學習演算法)
Keywords (English) ★ artificial neural networks
★ recurrent neural networks
★ hidden Markov model
★ Markov switching model
★ forecasting error
★ supervised learning algorithm
Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
1 Introduction 1
2 Background 3
2.1 Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1.1 Recurrent Neural Networks (RNNs) . . . . . . . . . . . . . . . 3
2.1.2 Back Propagation Through Time . . . . . . . . . . . . . . . . 8
2.1.3 The challenge of long-term dependencies . . . . . . . . . . . . 11
2.1.4 Long short-term memory . . . . . . . . . . . . . . . . . . . . 12
2.2 Hidden Markov Model (HMM) . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Elements of HMMs . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.2 The Three Problems for HMMs . . . . . . . . . . . . . . . . . 22
2.2.3 Solutions of the three problems of HMMs . . . . . . . . . . . 24
3 Neural Networks in HMMs 30
3.1 Output of neural network on classification task . . . . . . . . . . . . 30
3.1.1 Classification and Bayesian probabilities . . . . . . . . . . . . 30
3.1.2 Cost function . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 Discriminant HMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.1 Multi-Layer Perceptron with sequential input . . . . . . . . . 35
3.3 Combine RNNs and HMM . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Simulation 40
4.1 Model and parameters setting . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Real data analysis 48
5.1 Overview of data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6 Conclusion 51
References 53
References
Batzoglou, S., L. Pachter, J. P. Mesirov, B. Berger, and E. S. Lander (2000). Human and mouse gene structure: Comparative analysis and application to exon prediction. Genome Research 10(7), 950–958.
Bourlard, H. and N. Morgan (1993). Continuous speech recognition by connectionist statistical methods. IEEE Transactions on Neural Networks 4(6), 893–909.
Bourlard, H. and C. J. Wellekens (1989). Links between Markov models and multilayer perceptrons. In Advances in Neural Information Processing Systems, pp. 502–510.
Cho, K., B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Girshick, R. (2015). Fast R-CNN. arXiv preprint arXiv:1504.08083.
Graves, A., A.-r. Mohamed, and G. Hinton (2013). Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6645–6649. IEEE.
Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57(2), 357–384.
He, K., X. Zhang, S. Ren, and J. Sun (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
Mohanty, S. P., D. P. Hughes, and M. Salathé (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science 7, 1419.
Pedersen, J. S. and J. Hein (2003). Gene finding with a hidden Markov model of genome structure and evolution. Bioinformatics 19(2), 219–227.
Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257–286.
Rajpurkar, P., J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. Langlotz, K. Shpanskaya, et al. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.
Richard, M. D. and R. P. Lippmann (1991). Neural network classifiers estimate Bayesian a posteriori probabilities. Neural Computation 3(4), 461–483.
Sinha, A., H. Namkoong, and J. Duchi (2017). Certifiable distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571.
Weng, T.-W., H. Zhang, P.-Y. Chen, J. Yi, D. Su, Y. Gao, C.-J. Hsieh, and L. Daniel (2018). Evaluating the robustness of neural networks: An extreme value theory approach. arXiv preprint arXiv:1801.10578.
Advisor: Cheng-Der Fuh (傅承德)   Date of Approval: 2018-07-19
