NCU Institutional Repository (中大機構典藏) - theses and dissertations, past exams, journal articles, and research projects: Item 987654321/81372


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81372


    Title: 搭配優先經驗回放與經驗分類於深度強化式學習之記憶體減量技術 (Memory Reduction through Experience Classification with Prioritized Experience Replay in Deep Reinforcement Learning)
    Author: Shen, Kai-Huan (沈楷桓)
    Contributors: Department of Electrical Engineering
    Keywords: Deep reinforcement learning; Deep deterministic policy gradient; Prioritized experience replay
    Date: 2019-06-25
    Uploaded: 2019-09-03 15:49:01 (UTC+8)
    Publisher: National Central University
    Abstract: Prioritized experience replay is widely used in online reinforcement learning algorithms, allowing past experiences to be exploited efficiently; however, its large replay buffer consumes significant system storage. This work therefore proposes a segmentation and classification scheme that shortens the lifetime of well-learned experiences and extends that of valuable ones. Both segmentation and classification are driven by temporal-difference (TD) errors: situations that have not yet been learned produce experiences with large TD errors. The stored experiences are ranked by TD error to form a cumulative distribution function, which is divided into S segments, where S is a newly introduced hyper-parameter; the member count and TD-error boundaries of each segment are then computed. Each incoming experience is classified into the segment matching its TD error and swapped with data stored in that segment, which changes its lifetime in the buffer. A frozen region that follows the write-data pointer is also defined; experiences at addresses in this region are never swapped, preventing experiences from surviving too long through repeated swaps. As the network is updated, the segment information is refreshed periodically so that stale statistics do not control the buffer. The proposed scheme is incorporated into the Deep Deterministic Policy Gradient (DDPG) algorithm and verified on the Inverted Pendulum and Inverted Double Pendulum tasks. The experiments show that the mechanism effectively removes redundancy from the replay buffer and lowers the correlation among stored data, achieving better learning performance with a smaller buffer at the cost of additional TD-error computations. (A minimal code sketch of the buffer-management scheme is given below.)
    Appears in Collections: [Graduate Institute of Electrical Engineering] Theses & Dissertations
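
    The buffer-management scheme described in the abstract can be illustrated with a short, self-contained sketch. This is not the author's implementation; it is one possible reading of the abstract under several assumptions: the names (SegmentedReplayBuffer, refresh_segments, add) are invented for illustration, the S segments are taken as equal-mass quantile bins of the |TD-error| distribution, the swap partner within a segment is picked at random, and the frozen region is interpreted as the stretch of slots the write-data pointer is about to overwrite.

```python
import numpy as np


class SegmentedReplayBuffer:
    """Ring buffer that classifies experiences into S TD-error segments (illustrative sketch)."""

    def __init__(self, capacity, num_segments, frozen_fraction=0.1, refresh_every=1000):
        self.capacity = capacity
        self.S = num_segments                # hyper-parameter S from the abstract
        self.frozen_len = int(frozen_fraction * capacity)
        self.refresh_every = refresh_every
        self.data = [None] * capacity        # stored transitions
        self.td_errors = np.zeros(capacity)  # |TD error| per slot
        self.write_ptr = 0                   # write-data pointer of the ring buffer
        self.size = 0
        self.steps = 0
        self.boundaries = None               # TD-error boundaries separating the S segments

    def refresh_segments(self):
        # Rank |TD errors|, form the empirical CDF, and cut it into S equal-mass
        # segments; only the S-1 interior boundaries need to be stored.
        errors = self.td_errors[:self.size]
        quantiles = np.linspace(0.0, 1.0, self.S + 1)[1:-1]
        self.boundaries = np.quantile(errors, quantiles)

    def _segment_of(self, td_error):
        # Classify a TD error into one of the S segments.
        return int(np.searchsorted(self.boundaries, td_error))

    def _is_frozen(self, idx):
        # Assumption: slots about to be overwritten by the write pointer are frozen.
        # They may not take part in swaps, so old experiences cannot be rescued forever.
        return (idx - self.write_ptr) % self.capacity < self.frozen_len

    def add(self, transition, td_error):
        # 1) Write at the write pointer exactly as in a plain ring buffer.
        idx = self.write_ptr
        self.data[idx] = transition
        self.td_errors[idx] = td_error
        self.write_ptr = (self.write_ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)
        self.steps += 1

        # 2) Once the buffer is full, swap the new experience with one stored in
        #    its own segment; this changes how soon each of the two entries will
        #    be overwritten, without disturbing the segment populations.
        if self.size == self.capacity and self.boundaries is not None:
            seg = self._segment_of(td_error)
            candidates = [i for i in range(self.capacity)
                          if i != idx
                          and not self._is_frozen(i)
                          and self._segment_of(self.td_errors[i]) == seg]
            if candidates:
                j = int(np.random.choice(candidates))
                self.data[idx], self.data[j] = self.data[j], self.data[idx]
                self.td_errors[idx], self.td_errors[j] = self.td_errors[j], self.td_errors[idx]

        # 3) Periodically refresh the segment boundaries as the network learns,
        #    so stale TD-error statistics do not control the buffer.
        if self.boundaries is None or self.steps % self.refresh_every == 0:
            self.refresh_segments()
```

    In a DDPG training loop, add() would be called with the |TD error| reported by the critic for each new transition, and minibatch sampling would proceed as in standard prioritized experience replay; those parts are omitted here.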

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 Kb    HTML      119


    All items in NCUIR are protected by copyright, with all rights reserved.

