NCU Institutional Repository (theses and dissertations, past exams, journal articles, research projects): Item 987654321/95483


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95483


    Title: 訊息正則化的監督解耦式學習之研究; Decoupled Supervised Learning with Information Regularization
    Authors: 黃梓豪; Huang, Zih-Hao
    Contributors: 資訊工程學系 (Department of Computer Science and Information Engineering)
    Keywords: Supervised Learning; Decoupled Paradigm; Regularized Local Loss; Adaptive Inference Path; Dynamic Extended Layers; Dynamic Extended Layers with new features
    Date: 2024-07-15
    Date issued: 2024-10-09 16:53:46 (UTC+8)
    Publisher: 國立中央大學 (National Central University)
    Abstract: Increasing the number of layers to enhance model capability has become a clear trend in designing deep neural networks, but end-to-end backpropagation (BP) then faces optimization issues such as vanishing and exploding gradients. We propose a new decoupled model, Decoupled Supervised Learning with Information Regularization (DeInfoReg), which truncates the gradient at each block boundary so that the gradients of different blocks do not interfere with one another.
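The gradient truncation between blocks can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the thesis's implementation: the `DecoupledBlock` class, the tanh linear blocks, and the fixed random label projection used as each block's local target are all illustrative stand-ins. The key point it demonstrates is that each block computes gradients only for its own weights, treating its input as a constant, so no gradient ever crosses a block boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

class DecoupledBlock:
    """A hypothetical decoupled block: it fits its own local target and
    updates only its own weights, so no gradient crosses block boundaries."""
    def __init__(self, d_in, d_out, d_label):
        self.W = rng.normal(0.0, 0.1, (d_in, d_out))
        # A fixed random projection of the labels serves as this block's
        # local regression target (a stand-in for a learned label embedding).
        self.P = rng.normal(0.0, 1.0, (d_label, d_out))

    def forward(self, x):
        return np.tanh(x @ self.W)

    def local_step(self, x, y, lr=0.05):
        z = self.forward(x)
        target = y @ self.P
        err = z - target
        # Gradient of the local MSE w.r.t. W only; x is treated as a
        # constant, which is exactly the stop-gradient that decouples blocks.
        grad = x.T @ (2.0 * err * (1.0 - z ** 2)) / len(x)
        self.W -= lr * grad
        return z  # the next block receives a "detached" activation

x = rng.normal(size=(16, 8))
y = np.eye(3)[rng.integers(0, 3, size=16)]          # one-hot labels
blocks = [DecoupledBlock(8, 12, 3), DecoupledBlock(12, 6, 3)]

for _ in range(200):
    h = x
    for b in blocks:
        h = b.local_step(h, y)                      # each block trains locally
```

Because every block only ever differentiates through its own forward pass, depth no longer compounds the gradient: a 100-block stack behaves, optimization-wise, like 100 shallow models trained side by side.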

    DeInfoReg enhances model performance and flexibility through a new local loss function and a new model structure. The structure endows the model with an Adaptive Inference Path, Dynamic Extended Layers, and Dynamic Extended Layers with new features. The local loss measures the information in the model's output embeddings through three regularization methods: enforcing invariance between the output embeddings and the true labels, maintaining the variance of the output embeddings within each batch, and reducing redundancy by penalizing the covariance of the output embeddings. These regularizers help the model capture finer-grained features in the data and thus improve recognition performance.
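The three regularization terms can be sketched as follows. This is a minimal NumPy illustration in the style of VICReg-like losses, assuming a mean-squared invariance term against a label embedding `y_emb`, a hinge on the per-dimension standard deviation, and a penalty on off-diagonal covariance; the function name, weights, and thresholds are hypothetical, not the thesis's actual formulation.

```python
import numpy as np

def info_regularized_loss(z, y_emb, gamma=1.0, eps=1e-4,
                          w_inv=25.0, w_var=25.0, w_cov=1.0):
    """Hypothetical three-term local loss on a batch of embeddings z (n, d):
    invariance to label embeddings, per-dimension variance, low covariance."""
    n, d = z.shape
    # Invariance: keep each output embedding close to its label embedding.
    inv = np.mean((z - y_emb) ** 2)
    # Variance: push each embedding dimension's std above gamma, so the
    # embeddings within a batch stay diverse instead of collapsing.
    std = np.sqrt(z.var(axis=0) + eps)
    var = np.mean(np.maximum(0.0, gamma - std))
    # Covariance: penalize off-diagonal covariance between dimensions,
    # reducing redundant information in the embedding.
    zc = z - z.mean(axis=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    cov_pen = (off_diag ** 2).sum() / d
    return w_inv * inv + w_var * var + w_cov * cov_pen
```

Note how the variance term acts as an anti-collapse force: a batch of identical embeddings satisfies the invariance term trivially but is punished heavily by the variance hinge.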

    We evaluate the performance of DeInfoReg across various tasks and datasets. The experimental results demonstrate that DeInfoReg significantly alleviates gradient issues in deep neural networks and shows superior noise resistance under different proportions of label noise compared to traditional backpropagation. Additionally, we explore the model's potential applications in Adaptive Inference Paths and Dynamic Extended Layers with new features. The findings indicate that DeInfoReg has broad application prospects and robust expansion capabilities in deep neural networks. Finally, we discuss future improvements to enhance the model's practicality and generalization capabilities.
    Appears in Collections: [資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)] Master's and doctoral theses

    Files in This Item:

    File: index.html | Size: 0 KB | Format: HTML | Views: 15


    All items in NCUIR are protected by the original copyright.

