Backpropagation has been widely used in deep learning, but it is inefficient and sometimes unstable because of backward locking and the vanishing/exploding gradient problem, especially when the gradient flow is long. Additionally, updating all edge weights based on a single objective seems biologically implausible.

In this paper, we introduce a novel, biologically motivated learning structure called Associated Learning, which modularizes the network into smaller components, each with its own local objective. Because these objectives are mutually independent, Associated Learning can learn the parameters of different components independently and simultaneously.

Surprisingly, training deep models with Associated Learning yields accuracies comparable to those of models trained with typical backpropagation, which fits the target variable directly. Moreover, probably because the gradient flow within each component is short, Associated Learning can still train deep networks even when some of the activation functions are sigmoid, a setting in which typical backpropagation usually suffers from vanishing gradients.

We also found that Associated Learning generates better metafeatures, which we demonstrate both quantitatively (via inter-class and intra-class distance comparisons in the hidden layers) and qualitatively (by visualizing the hidden layers with t-SNE).
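To make the modular training scheme concrete, the following is a minimal PyTorch sketch of the general idea rather than the thesis's exact architecture: each component transforms the incoming features, encodes the target into a local code, and minimizes a local bridge loss between the two; activations are detached at component boundaries, so no gradient crosses from one component into another. All names here (ALComponent, f, g, bridge) and the choice of MSE as the local loss are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ALComponent(nn.Module):
        """One component: f transforms features, g encodes the target,
        and a bridge ties the two together via a local loss."""
        def __init__(self, x_in, x_out, y_in, y_out):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(x_in, x_out), nn.Sigmoid())
            self.g = nn.Sequential(nn.Linear(y_in, y_out), nn.Sigmoid())
            self.bridge = nn.Linear(x_out, y_out)

        def forward(self, x, y):
            s, t = self.f(x), self.g(y)
            local_loss = F.mse_loss(self.bridge(s), t)  # local objective only
            # Detach outputs so no gradient flows across component borders.
            return s.detach(), t.detach(), local_loss

    components = [ALComponent(784, 256, 10, 8), ALComponent(256, 128, 8, 8)]
    optimizers = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in components]

    x = torch.randn(32, 784)                                # dummy batch
    y = F.one_hot(torch.randint(0, 10, (32,)), 10).float()  # dummy labels

    for comp, opt in zip(components, optimizers):
        x, y, loss = comp(x, y)  # x and y come back detached
        opt.zero_grad()
        loss.backward()          # touches only this component's parameters
        opt.step()

Because each backward pass spans only one component, the chain of sigmoid derivatives never grows with network depth, which is consistent with the claim that sigmoid activations remain trainable; the detached hand-off is also what would allow the components to be updated in parallel.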
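The metafeature comparison can be sketched similarly. Below is one common way to quantify class separation in a hidden layer (mean distance to the own-class centroid versus mean pairwise distance between class centroids), followed by a t-SNE projection for qualitative inspection. The helper intra_inter_distances and the exact distance definitions are assumptions; the thesis may use a different formulation.

    import numpy as np
    from sklearn.manifold import TSNE

    def intra_inter_distances(feats, labels):
        # Intra: mean distance from each sample to its own class centroid.
        # Inter: mean pairwise distance between class centroids.
        cent = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
        intra = np.mean([np.linalg.norm(f - cent[c])
                         for f, c in zip(feats, labels)])
        cs = np.stack(list(cent.values()))
        pairwise = np.linalg.norm(cs[:, None, :] - cs[None, :, :], axis=-1)
        inter = pairwise.sum() / (len(cs) * (len(cs) - 1))  # off-diagonal mean
        return intra, inter

    feats = np.random.randn(500, 128)      # stand-in for hidden activations
    labels = np.random.randint(0, 10, 500)
    print(intra_inter_distances(feats, labels))
    embedding = TSNE(n_components=2).fit_transform(feats)  # 2-D view to plot

A lower intra-to-inter ratio indicates tighter, better-separated classes, which is the sense in which the hidden representations (metafeatures) are "better".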