RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/9287


    Title: 以自我組織特徵映射圖為基礎之模糊系統實作連續性Q-learning; A SOM-based Fuzzy Systems Q-learning in Continuous State and Action Space
    Author: 陳律宇 (Lu-Yu Chen)
    Contributor: Institute of Computer Science and Information Engineering (資訊工程研究所)
    Keywords: task decomposition; continuous Q-learning; reinforcement learning; self-organizing feature map
    Date: 2006-07-06
    Upload time: 2009-09-22 11:44:42 (UTC+8)
    Publisher: National Central University Library
    Abstract: In reinforcement learning, there is no supervisor to critically judge the chosen action at each step; the agent learns through a trial-and-error procedure while interacting with a dynamic environment. Q-learning is one popular approach to reinforcement learning. It is widely applied to problems with discrete states and actions, and it is usually implemented as a look-up table in which each entry corresponds to one combination of a state and an action. However, the look-up-table implementation of Q-learning fails in problems with continuous state and action spaces, because an exhaustive enumeration of all state-action pairs is impossible. In this thesis, an implementation of Q-learning for problems with continuous state and action spaces, using fuzzy systems based on self-organizing feature map (SOM) networks, is proposed, and the method is applied to the design of control systems. Because reinforcement learning is usually a slow process, a hybrid approach that integrates the ideas of hierarchical learning and progressive learning to decompose a complex task into simple elementary tasks is also proposed to accelerate training. Simulations of training a robot to complete two different tasks demonstrate the effectiveness of the proposed approach.
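    The look-up-table formulation that the abstract contrasts against can be sketched in a few lines of standard tabular Q-learning. This is a minimal illustration of that baseline, not the thesis's SOM-based fuzzy method; the 4-state chain environment, reward scheme, and hyperparameters below are hypothetical choices for demonstration only.

    ```python
    import random

    # Tabular Q-learning on a toy 4-state chain: states 0..3, state 3 is the
    # goal (reward 1).  The look-up table Q maps each (state, action) pair to
    # a Q value, updated toward reward + discounted best next-state value.
    N_STATES = 4
    ACTIONS = (-1, +1)            # move left or right along the chain
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Environment dynamics: clamp to the chain; reward 1 at the goal."""
        nxt = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward

    def choose(state):
        """Epsilon-greedy action selection over the look-up table."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    random.seed(0)
    for _ in range(200):          # training episodes
        s = 0
        while s != N_STATES - 1:
            a = choose(s)
            s2, r = step(s, a)
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    # Greedy policy recovered from the table: move right toward the goal.
    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
    ```

    The table here has only 8 entries; with continuous states and actions no such enumeration exists, which is exactly the gap the thesis's SOM-based fuzzy approximation is designed to fill.
    
    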
    Appears in Collections: [Institute of Computer Science and Information Engineering] Master's and Doctoral Theses


    All items in NCUIR are protected by copyright, with all rights reserved.

