NCU Institutional Repository (National Central University) - theses and dissertations, past exams, journal articles, and research projects: Item 987654321/98290


    Please use this identifier to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98290


    Title: Hybrid-SCL: A Hybrid Supervised Contrastive Learning with Self-Supervised Centroid Alignment
    Authors: 郭柏成;Kuo, Po-Cheng
    Contributors: Department of Information Management
    Keywords: Image Classification;Supervised Learning;Self-Supervised Learning;Contrastive Learning;Centroid Alignment
    Date: 2025-07-18
    Issue Date: 2025-10-17 12:35:18 (UTC+8)
    Publisher: National Central University
    Abstract: Due to its widespread applicability, image representation learning has become a crucial technique in the computer vision domain. In particular, contrastive learning (CL) has been proven to effectively enhance feature extraction for learning high-quality representations. Prior related studies mainly focus on different data-construction strategies in supervised and self-supervised contrastive learning. However, different supervision signals provide different levels of semantic information, which may directly affect model training. This limitation also constrains the degree of feature extraction, leading to suboptimal performance. In this study, we propose a novel Hybrid Supervised Contrastive Learning (abbreviated as Hybrid-SCL) framework that integrates supervised and self-supervised techniques for CL model construction. In addition, two optimization strategies, centroid-oriented alignment and dynamic weight learning, are introduced to enhance the performance of Hybrid-SCL. The proposed framework can effectively merge the benefits of both supervised and self-supervised information to further improve the quality of the derived embedding representations. Finally, extensive experiments conducted on real datasets demonstrate the superiority of Hybrid-SCL over state-of-the-art methods in terms of various evaluation metrics. The experimental results show that Hybrid-SCL consistently outperforms all baseline models on accuracy, precision, and recall. We also use a case study to show the applicability of the proposed framework.
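    The abstract does not give the actual loss formulation, so the following is only a rough illustrative sketch of how a supervised contrastive term might be combined with a centroid-alignment term and a blending weight. All names (`hybrid_scl_loss`, `alpha`, `tau`) and design details here are hypothetical assumptions, not taken from the thesis; in particular, the thesis learns the blending weight dynamically, whereas this sketch fixes it.

    ```python
    import numpy as np

    def hybrid_scl_loss(z, labels, alpha=0.5, tau=0.1):
        """Hypothetical sketch: SupCon-style supervised contrastive term
        plus a centroid-alignment penalty, blended by a fixed weight.

        z      : (N, D) L2-normalized embeddings
        labels : (N,) integer class labels
        alpha  : blending weight (assumed fixed here; learned in the thesis)
        tau    : softmax temperature
        """
        N = z.shape[0]
        sim = z @ z.T / tau                      # pairwise scaled cosine similarities
        np.fill_diagonal(sim, -np.inf)           # exclude self-pairs from the softmax
        log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

        # Supervised contrastive term: average log-probability of same-class pairs
        same = (labels[:, None] == labels[None, :]) & ~np.eye(N, dtype=bool)
        sup_loss = -np.mean([log_prob[i, same[i]].mean()
                             for i in range(N) if same[i].any()])

        # Centroid-alignment term: pull each embedding toward its class centroid
        classes = np.unique(labels)
        cent_loss = 0.0
        for c in classes:
            members = z[labels == c]
            centroid = members.mean(axis=0)
            centroid /= np.linalg.norm(centroid) + 1e-12
            cent_loss += ((members - centroid) ** 2).sum(axis=1).mean()
        cent_loss /= len(classes)

        return alpha * sup_loss + (1 - alpha) * cent_loss
    ```

    The point of the sketch is the structure: a label-driven contrastive term supplies supervised signal, the centroid term tightens each class cluster, and a weight trades the two off.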
    Appears in Collections:[Graduate Institute of Information Management] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html | Size: 0Kb | Format: HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

