Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81907


Title: Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation
Authors: Kao, Yu-Wei (高聿緯)
Contributors: Department of Computer Science and Information Engineering
Keywords: Biologically plausible algorithm; Deep learning; Parallel computing; Modularization
Date: 2019-11-08
Date Uploaded: 2020-01-07 14:36:42 (UTC+8)
Publisher: National Central University
Abstract: Backpropagation has been widely used in deep learning, but it is inefficient and sometimes unstable because of backward locking and the vanishing/exploding gradient problems, especially when the gradient flow is long. Additionally, updating all edge weights based on a single objective seems biologically implausible.

In this paper, we introduce a novel, biologically motivated learning structure called Associated Learning, which modularizes the network into smaller components, each with its own local objective. Because these objectives are mutually independent, Associated Learning can learn the parameters of different components independently and simultaneously.

Surprisingly, training deep models with Associated Learning yields accuracies comparable to those of models trained with typical backpropagation, which fits the target variable directly. Moreover, probably because the gradient flow within each component is short, Associated Learning can still train deep networks even when some of the activation functions are sigmoid, a situation that usually causes the vanishing gradient problem under typical backpropagation.
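The central idea above, components with mutually independent local objectives so that no gradient crosses component boundaries, can be illustrated with a short sketch. The code below is a hypothetical simplification, not the thesis's actual architecture: real Associated Learning derives each component's local target via auto-encoders and target propagation, whereas here every block simply uses the class label as its local target. `LocalBlock`, `train_step`, and all layer dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    """One component: a feature transform plus a local head ("bridge")
    trained against this component's local target."""
    def __init__(self, in_dim, out_dim, n_classes):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Sigmoid())
        self.bridge = nn.Linear(out_dim, n_classes)

    def forward(self, x):
        return self.f(x)

blocks = nn.ModuleList([LocalBlock(784, 256, 10), LocalBlock(256, 128, 10)])
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
criterion = nn.CrossEntropyLoss()

def train_step(x, y):
    h = x
    for block, opt in zip(blocks, opts):
        h = block(h.detach())                 # detach: no gradient crosses component boundaries
        loss = criterion(block.bridge(h), y)  # local objective; here simply the label itself
        opt.zero_grad()
        loss.backward()                       # gradient flow stays inside this one block
        opt.step()
    return h

# Example: one training step on random data shaped like flattened MNIST.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
train_step(x, y)
```

Because `detach()` severs the graph between blocks, each block's update depends only on its own local loss; the loop runs sequentially here, but nothing in the gradient computation forces it to, which is what permits the independent and simultaneous training the abstract describes.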
We also found that Associated Learning generates better metafeatures, which we demonstrated both quantitatively (via inter-class and intra-class distance comparisons in the hidden layers) and qualitatively (by visualizing the hidden layers using t-SNE).
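A sketch of the two evaluations mentioned above, under the assumption that "inter-class and intra-class distance" means Euclidean distances around class centroids; the thesis may define these metrics differently. The function name `intra_inter_distances` and the random stand-in features are purely illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

def intra_inter_distances(features, labels):
    """Mean distance to own class centroid (intra) vs. mean distance
    between class centroids (inter); lower intra and higher inter
    suggest better-separated metafeatures."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    pairwise = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    inter = pairwise[np.triu_indices(len(classes), k=1)].mean()
    return intra, inter

# Stand-in hidden activations; in practice these come from a trained network.
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 64))
labels = rng.integers(0, 10, size=300)
print(intra_inter_distances(features, labels))

# Qualitative view: project the hidden features to 2-D with t-SNE.
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
```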
Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

Files in This Item:

File        Description    Size    Format    Visits
index.html                 0Kb     HTML      191


All items in NCUIR are protected by original copyright.
