Over the past decade, advances in self-supervised image recognition have significantly influenced the field of computer vision. However, existing self-supervised contrastive learning methods still face challenges in image classification and recognition, including intra-class dispersion, inter-class overlap, and clustering instability. This paper introduces Dynamic Centroid-Based Contrastive Learning (DCB-CL) with Knowledge Distillation, a novel approach that integrates contrastive learning, clustering mechanisms, and feature alignment to enhance feature representation. Our method first incorporates a dynamic centroid updating mechanism and an outlier elimination strategy to mitigate clustering instability while refining class-wise feature distributions, thereby improving intra-class consistency and inter-class separability. In addition, we employ a teacher-student knowledge distillation framework in which the teacher network transfers high-level semantic information to the student, ensuring stable and well-aligned feature representations. Extensive experiments demonstrate that DCB-CL outperforms state-of-the-art baselines on multiple real-world datasets, significantly improving representation quality, strengthening feature consistency, and reducing inter-class confusion. These results further validate its strong generalization capability in real-world image recognition.
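For concreteness, the sketch below illustrates one way a dynamic centroid update with outlier elimination could operate; the momentum coefficient, the distance-based rejection threshold, and the function name `update_centroid` are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def update_centroid(centroid, features, momentum=0.9, outlier_z=2.0):
    """Hypothetical dynamic centroid update with outlier elimination.

    centroid : (d,) current class centroid
    features : (n, d) embeddings assigned to this cluster in the batch
    momentum : EMA coefficient (illustrative value)
    outlier_z: features farther than outlier_z standard deviations from
               the mean distance are dropped before updating (assumption)
    """
    # Distance of each feature to the current centroid.
    dists = np.linalg.norm(features - centroid, axis=1)
    # Keep only inliers: within outlier_z std-devs of the mean distance.
    keep = dists <= dists.mean() + outlier_z * dists.std()
    if not keep.any():  # every point flagged as an outlier; skip the update
        return centroid
    batch_mean = features[keep].mean(axis=0)
    # Exponential moving average stabilizes the centroid across batches.
    return momentum * centroid + (1.0 - momentum) * batch_mean

# Toy usage: a tight 2-D cluster with one injected far-away outlier.
rng = np.random.default_rng(0)
feats = rng.normal(loc=1.0, scale=0.1, size=(32, 2))
feats[0] = [10.0, 10.0]  # injected outlier
c = update_centroid(np.zeros(2), feats)
print(c)  # moves toward ~[1, 1], largely ignoring the outlier
```

Updating centroids with a momentum term rather than recomputing them from each batch is a common way to damp the batch-to-batch jitter that the abstract refers to as clustering instability.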
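Likewise, a hedged illustration of the teacher-student alignment step: the exponential-moving-average teacher and the cosine alignment loss below are standard self-distillation choices (in the spirit of BYOL/DINO), used here only as an assumption about how the distillation component might operate.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    """EMA teacher update (the momentum value is an illustrative assumption)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def distill_loss(student_feats, teacher_feats):
    """Cosine alignment between student and (detached) teacher embeddings."""
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats.detach(), dim=1)
    return (2.0 - 2.0 * (s * t).sum(dim=1)).mean()

# Toy usage with identical small encoders standing in for the two networks.
student = torch.nn.Linear(8, 4)
teacher = torch.nn.Linear(8, 4)
teacher.load_state_dict(student.state_dict())  # initialize teacher = student
x = torch.randn(16, 8)
loss = distill_loss(student(x), teacher(x))
loss.backward()               # gradients flow only into the student
ema_update(teacher, student)  # teacher tracks the student slowly
print(loss.item())
```

Detaching the teacher's output and updating it only by EMA means the teacher provides a slowly-moving target, which is one common route to the stable, well-aligned representations the abstract claims.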