NCU Institutional Repository: Item 987654321/81907

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81907


    Title: Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation
    Authors: 高聿緯;Kao, Yu-Wei
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Biologically plausible algorithm;Deep learning;Parallel computing;Modularization
    Date: 2019-11-08
    Issue Date: 2020-01-07 14:36:42 (UTC+8)
    Publisher: National Central University
    Abstract: Backpropagation has been widely used in deep learning, but it is inefficient and sometimes unstable because of backward locking and vanishing/exploding gradient problems, especially when the gradient flow is long. Additionally, updating all edge weights based on a single objective seems biologically implausible. In this paper, we introduce a novel biologically motivated learning structure called Associated Learning, which modularizes the network into smaller components, each of which has a local objective. Because the objectives are mutually independent, Associated Learning can learn the parameters independently and simultaneously when these parameters belong to different components. Surprisingly, training deep models by Associated Learning yields accuracies comparable to those of models trained by typical backpropagation, which aims at fitting the target variable directly. Moreover, probably because the gradient flow of each component is short, deep networks can still be trained with Associated Learning even when some of the activation functions are sigmoid, a situation that usually causes the vanishing gradient problem under typical backpropagation. We also found that Associated Learning generates better metafeatures, which we demonstrated both quantitatively (via inter-class and intra-class distance comparisons in the hidden layers) and qualitatively (by visualizing the hidden layers using t-SNE).
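    A note on the mechanism: the abstract describes Associated Learning only at a high level. The sketch below is a minimal, hypothetical Python (PyTorch) illustration of the decoupling it relies on: two components with mutually independent local objectives, where detach() cuts the global gradient flow so each component is updated only by its own local loss. All layer sizes, targets, and names here are illustrative assumptions; the thesis builds its local targets from auto-encoders and target propagation, which this sketch does not reproduce.

        # Hypothetical simplification of local-objective training; not the thesis code.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)

        f1 = nn.Sequential(nn.Linear(8, 16), nn.Sigmoid())  # component 1
        g1 = nn.Linear(16, 4)   # local head giving component 1 its own objective
        f2 = nn.Linear(16, 4)   # component 2

        opt = torch.optim.SGD(
            list(f1.parameters()) + list(g1.parameters()) + list(f2.parameters()),
            lr=0.1)
        loss_fn = nn.MSELoss()

        x, y = torch.randn(32, 8), torch.randn(32, 4)  # toy batch

        h1 = f1(x)
        loss1 = loss_fn(g1(h1), y)           # local objective for component 1
        loss2 = loss_fn(f2(h1.detach()), y)  # detach(): no gradient flows back into component 1

        opt.zero_grad()
        (loss1 + loss2).backward()  # the two losses touch disjoint parameter sets
        opt.step()

    Because loss1 and loss2 update disjoint parameters, the two components could in principle be trained simultaneously, which is the parallelism the abstract refers to; the short per-component gradient flow is also why sigmoid activations remain trainable in this setting.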
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File        Size  Format
    index.html  0Kb   HTML


    All items in NCUIR are protected by copyright, with all rights reserved.
