NCU Institutional Repository (中大機構典藏), hosting theses and dissertations, past exam papers, journal articles, and research projects: Item 987654321/93523


    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93523


    Title: 深度學習基礎模型與自監督學習; Deep Learning Foundation Model with Self-Supervised Learning
    Authors: 陳文研; Nhiem, Tran Van
    Contributors: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: Self-Supervised Learning; Computer Vision; Visual Representation Learning; Deep Neural Network; Image Analysis; Feature Learning; Deep Learning Foundation Model; Computer Vision Foundation Model; Image Processing
    Date: 2024-01-16
    Uploaded: 2024-09-19 17:10:17 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Recent advances in self-supervised learning have shown promise as an alternative to supervised learning, particularly in addressing its critical shortcomings: the need for abundant labeled data and poor generalization across different tasks. Self-supervised learning pre-trains deep neural networks on pretext tasks using easily acquired, unlabeled data and then fine-tunes them on the downstream tasks of interest, requiring far fewer labeled examples than supervised learning. Notably, self-supervised learning has demonstrated success in diverse domains, including text, vision, and speech.
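    For orientation, the pre-train/fine-tune paradigm described here can be sketched minimally in PyTorch: an encoder is pre-trained on unlabeled data with a pretext objective, then a small head is attached and trained (or linear-probed) on labeled downstream data. The backbone choice and class count below are hypothetical illustrations, not the thesis's code.

        import torch.nn as nn
        import torchvision

        # Stage 1: self-supervised pre-training on unlabeled data.
        encoder = torchvision.models.resnet50(weights=None)   # hypothetical backbone
        encoder.fc = nn.Identity()                            # expose 2048-d features
        # ... optimize `encoder` with a pretext loss (e.g., a contrastive objective) ...

        # Stage 2: fine-tune on a labeled downstream task (10 classes is illustrative).
        head = nn.Linear(2048, 10)
        model = nn.Sequential(encoder, head)
        for p in encoder.parameters():    # freeze for a linear probe;
            p.requires_grad = False       # unfreeze to fine-tune end to end
        # ... train with cross-entropy on the (smaller) labeled set ...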
    In this thesis, we present several novel self-supervised learning methods for visual representation learning that improve performance on multiple computer vision downstream tasks. These methods are designed to leverage the input data itself to generate learning targets. Our first method, HAPiCLR, exploits pixel-level information from an object's contextual representation together with a contrastive learning objective, allowing it to learn more robust and efficient image representations for downstream tasks. The second method, HARL, introduces a heuristic attention-based approach that maximizes the abstract object-level embedding in vector space, resulting in higher-quality semantic representations. Finally, the MVMA framework combines multiple augmentation pipelines; by leveraging both global and local information from each training sample, it can explore a vast range of image appearances. This approach yields representations that are not only scale-invariant but also invariant to nuisance factors, making them more robust and efficient for downstream tasks.
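    HAPiCLR builds on a contrastive objective computed over augmented views of the same image. As a point of reference, here is a minimal sketch of the standard NT-Xent (InfoNCE) loss that such methods extend; it illustrates only the generic batch-wise contrastive objective, not the pixel-level or attention-based terms specific to the thesis, and the function name is hypothetical.

        import torch
        import torch.nn.functional as F

        def nt_xent_loss(z1, z2, temperature=0.5):
            # z1, z2: (N, D) projections of two augmented views of the same N images.
            # Positive pairs are (z1[i], z2[i]); the other 2N-2 samples act as negatives.
            n = z1.size(0)
            z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
            sim = z @ z.t() / temperature                        # cosine-similarity logits
            sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
            # Row i's positive sits at i+N (first half) or i-N (second half).
            targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
            return F.cross_entropy(sim, targets)

    Per the abstract, HAPiCLR augments this kind of objective with pixel-level contextual information, while HARL instead maximizes object-level embeddings via heuristic attention.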
    These methods notably improve performance on tasks such as image classification, object detection, and semantic segmentation. They demonstrate the ability of self-supervised algorithms to extract high-level image features, thereby enhancing the efficiency of deep neural networks across a variety of computer vision tasks. This thesis not only introduces new learning algorithms but also provides a comprehensive analysis of self-supervised representations and the factors that differentiate models from one another. Overall, it presents a suite of innovative, adaptable, and efficient approaches to self-supervised learning of image representations, significantly boosting the robustness and effectiveness of the learned features.
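    The MVMA framework above is described as combining multiple augmentation pipelines over global and local views of each training sample. The following is a minimal sketch of such a multi-view data path, in the spirit of multi-crop augmentation; crop sizes, scales, and view counts are illustrative assumptions, not MVMA's published settings.

        from torchvision import transforms

        # Two augmentation pipelines: global views cover most of the image,
        # local views are small crops. All parameter values are illustrative.
        global_aug = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=(0.4, 1.0)),
            transforms.RandomHorizontalFlip(),
            transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
            transforms.ToTensor(),
        ])
        local_aug = transforms.Compose([
            transforms.RandomResizedCrop(96, scale=(0.05, 0.4)),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
        ])

        def multi_view(image, n_global=2, n_local=4):
            # Returns several global and local augmented views of one PIL image;
            # a multi-view objective would then align their representations.
            return ([global_aug(image) for _ in range(n_global)],
                    [local_aug(image) for _ in range(n_local)])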
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File          Description    Size    Format    Views
    index.html    -              0 Kb    HTML      28


    All items in NCUIR are protected by copyright, with all rights reserved.

