Owing to its widespread applicability, image representation learning has become a crucial technique in the computer vision domain. In particular, contrastive learning (CL) has proven effective at enhancing feature extraction, yielding higher-quality representations. Prior related studies mainly focus on different data constructions in supervised and self-supervised contrastive learning. However, different supervision signals provide different levels of semantic information, which may directly affect model training. This limitation also constrains the degree of feature extraction, leading to suboptimal performance. In this study, we propose a novel Hybrid Supervised Contrastive Learning (abbreviated as Hybrid-SCL) framework that integrates supervised and self-supervised techniques for CL model construction. In addition, two optimization strategies, centroid-oriented alignment and dynamic weight learning, are introduced to enhance the performance of Hybrid-SCL. The proposed framework effectively merges the benefits of both supervised and self-supervised information to further improve the quality of the derived embedding representations. Finally, extensive experiments on real datasets demonstrate the superiority of Hybrid-SCL over state-of-the-art methods in terms of various evaluation metrics: Hybrid-SCL consistently outperforms all baseline models on accuracy, precision, and recall. We also present a case study to show the applicability of the proposed framework.
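The abstract does not specify how the supervised and self-supervised signals are combined. As a purely illustrative sketch (not the authors' actual method), one common way to hybridize the two is a weighted sum of an InfoNCE-style self-supervised term, whose positives are augmented views of the same image, and a SupCon-style supervised term, whose positives share a class label; the weight `alpha` is a stand-in for what the paper's dynamic weight learning would adapt. The function names, the `aug_index` bookkeeping, and the fixed `alpha` are all assumptions introduced here for illustration.

```python
import numpy as np

def contrastive_loss(z, pos_mask, tau=0.1):
    """Generic InfoNCE-style loss: for each anchor i, pull embeddings j with
    pos_mask[i, j] == True closer and push all other samples away."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # mean log-probability over each anchor's positive set (guard empty sets)
    per_anchor = np.where(pos_mask, log_prob, 0.0).sum(1) / np.maximum(pos_mask.sum(1), 1)
    return -per_anchor.mean()

def hybrid_scl_loss(z, labels, aug_index, alpha=0.5, tau=0.1):
    """Hypothetical hybrid objective: alpha weights a self-supervised term
    (positives = views of the same source image, matched via aug_index)
    against a supervised term (positives = same class label)."""
    n = len(z)
    not_self = ~np.eye(n, dtype=bool)
    self_mask = np.equal.outer(aug_index, aug_index) & not_self
    sup_mask = np.equal.outer(labels, labels) & not_self
    return (alpha * contrastive_loss(z, self_mask, tau)
            + (1 - alpha) * contrastive_loss(z, sup_mask, tau))
```

Setting `alpha=1` recovers a purely self-supervised objective and `alpha=0` a purely supervised one; a dynamic-weighting scheme like the one the abstract mentions would presumably adjust this trade-off during training rather than fix it.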