Increasing the number of layers to enhance model capabilities has become a clear trend in designing deep neural networks. However, this approach faces optimization issues such as vanishing or exploding gradients. We propose a new model, Decoupled Supervised Learning with Information Regularization (DeInfoReg), which decouples the gradients of different blocks so that they do not interfere with one another.
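To make the decoupling concrete, the following is a minimal sketch (in PyTorch, not the authors' released implementation) of block-wise training: each block receives a detached copy of the previous block's output, so backpropagation of its local loss stops at the block boundary. The `local_loss` here is a hypothetical placeholder for the block-level objective described below.

```python
import torch
import torch.nn as nn

# Four blocks, each trained by its own optimizer and local loss.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(128, 128), nn.ReLU()) for _ in range(4)
])
optimizers = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]

def local_loss(h):
    # Placeholder block-level objective (illustrative only).
    return h.pow(2).mean()

x = torch.randn(32, 128)
h = x
for block, opt in zip(blocks, optimizers):
    h = block(h.detach())   # detach: gradients stop at the block boundary
    loss = local_loss(h)
    opt.zero_grad()
    loss.backward()         # updates only this block's parameters
    opt.step()
```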
DeInfoReg enhances model performance and flexibility through a new Local Loss and a new model structure. The structure endows the model with an Adaptive Inference Path, Dynamically Expanded Layers, and Dynamically Extended Layers that incorporate new features. The Local Loss measures the information in the model's output embeddings through three regularization terms: ensuring the invariance of the output embeddings with respect to the true labels, maintaining the variance of the output embeddings across each batch, and using covariance to reduce redundancy among embedding dimensions. These terms enable the model to better capture the features in the data, thus improving performance.
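The following is a hedged sketch of such a three-term loss, assuming PyTorch; the term weights, the variance target, and the use of a label embedding `y_emb` for the invariance term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_regularized_loss(z, y_emb, var_target=1.0, eps=1e-4,
                          w_inv=25.0, w_var=25.0, w_cov=1.0):
    # Invariance: pull each output embedding toward an embedding of its
    # true label (y_emb is an assumed label-embedding step).
    inv = F.mse_loss(z, y_emb)

    # Variance: hinge that keeps each embedding dimension's standard
    # deviation above a target across the batch, discouraging collapse.
    std = torch.sqrt(z.var(dim=0) + eps)
    var = torch.relu(var_target - std).mean()

    # Covariance: penalize off-diagonal covariance entries to reduce
    # redundancy between embedding dimensions.
    n, d = z.shape
    zc = z - z.mean(dim=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_pen = off_diag.pow(2).sum() / d

    return w_inv * inv + w_var * var + w_cov * cov_pen
```

A loss of this form can be attached to each decoupled block, which is what drives the per-block updates sketched above.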
We evaluate the performance of DeInfoReg on a variety of tasks and datasets. The experimental results demonstrate that DeInfoReg significantly alleviates gradient issues in deep neural networks and exhibits superior noise resistance under different proportions of label noise compared to traditional backpropagation. Additionally, we explore potential applications of the model's Adaptive Inference Path and Dynamically Expanded Layers with new features. The findings indicate that DeInfoReg has broad application prospects and robust expansion capabilities in deep neural networks. Finally, we discuss future improvements to further enhance the model's practicality and generalization capabilities.