DC Field | Value | Language |
dc.contributor | 資訊管理學系 | zh_TW |
dc.creator | 蔡子涵 | zh_TW |
dc.creator | Tzu-Han Tsai | en_US |
dc.date.accessioned | 2019-08-21T07:39:07Z | |
dc.date.available | 2019-08-21T07:39:07Z | |
dc.date.issued | 2019 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=106423051 | |
dc.contributor.department | 資訊管理學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 機器學習演算法是一類從資料中自動分析、歸納規律,並利用規律對未知資料進行預測的演算法。機器學習已廣泛應用於資料探勘、電腦視覺、自然語言處理、生物特徵識別、搜尋引擎、醫學診斷、信用卡詐欺偵測、證券市場分析等領域。網路時代的來臨帶動了資料量的成長,但針對某一資料集設計神經網路架構需要專業知識、時間以及運算資源:每一個神經網路架構都是由專家憑藉專業知識、經過一次又一次仔細的實驗,或是修改少數現有的優秀神經網路而來。為了加速神經網路的建構,我們建構了一套系統 HILL-CLIMBING MODEL(HCM);這是一種基於強化學習的建模演算法,可以針對給定的學習任務自動生成表現優異的神經網路架構。我們使用搭配 Epsilon 貪婪探索策略與經驗回放的 DQN 訓練強化學習代理,使其依序選擇神經網路的層,並經由這些經驗與策略生成表現優異的神經網路。強化學習搭配貪婪式探索擴大了可探索的架構空間,並經由迭代發現在學習任務上表現更佳的設計。即使在圖像分類基準測試中,代理所設計的網路也能與人工設計的現有網路表現相當,而且效率更高。 | zh_TW |
dc.description.abstract | Designing neural network (NN) architectures requires both human expertise and labor. New architectures are handcrafted through careful experimentation or modified from a handful of existing networks. We introduce HCM (Hill-Climbing Model), a meta-modeling algorithm based on reinforcement learning that automatically generates high-performing NN architectures for a given learning task. The learning agent is trained to sequentially choose NN layers using a DQN with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. Even on image classification benchmarks, the agent-designed networks can perform as well as existing hand-designed networks while being more efficient. We also outperform existing meta-modeling approaches to network design on image classification and regression tasks. | en_US |
dc.subject | 機器學習 | zh_TW |
dc.subject | 神經網路 | zh_TW |
dc.subject | 強化學習 | zh_TW |
dc.subject | 神經網路架構 | zh_TW |
dc.subject | Machine learning | en_US |
dc.subject | Neural network | en_US |
dc.subject | Reinforcement learning | en_US |
dc.subject | Neural network architecture | en_US |
dc.title | A DQN-Based Reinforcement Learning Model for Neural Network Architecture Search | en_US |
dc.language.iso | en_US | en_US |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |
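The abstract describes an agent that sequentially chooses NN layers using a DQN with ε-greedy exploration and experience replay. The following is a minimal, hypothetical sketch of that loop, not the thesis's actual implementation: a tabular Q function stands in for the deep Q-network, the layer vocabulary (`LAYER_CHOICES`), depth limit, and stub reward are all invented for illustration, and real validation accuracy would replace the stub.

```python
import random
from collections import deque

random.seed(0)

# Hypothetical layer vocabulary and depth limit (not from the thesis).
LAYER_CHOICES = ["conv", "pool", "fc", "terminate"]
MAX_DEPTH = 4

def evaluate(architecture):
    # Stub reward: the real system would train the candidate network and
    # return its validation accuracy. Here deeper architectures score higher.
    return len(architecture) / MAX_DEPTH

def sample_architecture(q_table, epsilon):
    """Sequentially choose layers with an epsilon-greedy policy."""
    arch, state = [], ()
    while len(arch) < MAX_DEPTH:
        if random.random() < epsilon:
            action = random.choice(LAYER_CHOICES)  # explore
        else:
            qs = q_table.get(state, {})
            action = max(LAYER_CHOICES, key=lambda a: qs.get(a, 0.0))  # exploit
        if action == "terminate":
            break
        arch.append(action)
        state = tuple(arch)
    return arch

def train(episodes=200, epsilon=1.0, alpha=0.1, gamma=1.0):
    q_table, replay = {}, deque(maxlen=500)
    for _ in range(episodes):
        arch = sample_architecture(q_table, epsilon)
        reward = evaluate(arch)
        # Store every (state, action, next_state, reward) transition.
        for i in range(len(arch)):
            replay.append((tuple(arch[:i]), arch[i], tuple(arch[:i + 1]), reward))
        # Experience replay: Q-update on a random minibatch of past transitions.
        for s, a, s_next, r in random.sample(replay, min(32, len(replay))):
            next_q = max(q_table.get(s_next, {}).get(b, 0.0) for b in LAYER_CHOICES)
            qs = q_table.setdefault(s, {})
            qs[a] = qs.get(a, 0.0) + alpha * (r + gamma * next_q - qs.get(a, 0.0))
        epsilon = max(0.1, epsilon * 0.98)  # anneal exploration toward exploitation
    # Greedy rollout with the learned Q values.
    return sample_architecture(q_table, epsilon=0.0)

best = train()
print(best)
```

With this stub reward the greedy rollout simply grows to the depth limit; in the real system the replayed rewards would steer the agent toward layer sequences that generalize well on the target task.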