Large, high-quality datasets are crucial for training Deep Neural Networks (DNNs); however, datasets collected in the real world often lack accurate, well-curated annotations. To address this label uncertainty, researchers have turned their attention to Unreliable Partial Label Learning (UPLL), which is more realistic than traditional Partial Label Learning (PLL). Because publicly available UPLL datasets are scarce, previous research has usually had to synthesize them artificially. This paper proposes a novel method for generating UPLL datasets, called Candidate Label Inference Generation (CLIG), which leverages a model trained on a fully labeled dataset together with statistics gathered from a self-collected dataset to generate candidate label sets that match real-world labeling tendencies. Experimental results demonstrate that CLIG is more realistic and reasonable than previous generation methods.

Additionally, this paper introduces two UPLL frameworks: Feature Alignment Pseudo-Target Learning (FAPT) and Feature Alignment Temporarily Expanded Labels Set (FATEL). Both frameworks use contrastive learning to improve the features the model extracts and supervised learning to perform classification. FAPT converts each candidate label set into a pseudo-target and updates it at the end of every epoch, while FATEL temporarily adds one potentially true label to each candidate label set before every epoch ends. Experimental results show that FAPT and FATEL outperform state-of-the-art UPLL methods on multiple image datasets.
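The abstract only sketches the per-epoch label-set maintenance behind FAPT and FATEL. The following is a minimal, hypothetical NumPy sketch of those two steps, not the thesis's actual implementation: the function names, the renormalization used for the pseudo-targets, and the arg-max choice of the added label are all assumptions introduced here for illustration.

```python
import numpy as np

def fapt_update_pseudo_targets(probs, candidate_mask):
    """FAPT-style update (hypothetical sketch): turn each candidate label
    set into a soft pseudo-target by renormalizing the model's predicted
    probabilities over the candidate labels only.

    probs:          (N, C) softmax outputs from the current epoch.
    candidate_mask: (N, C) binary matrix, 1 where a class belongs to the
                    instance's candidate label set.
    """
    masked = probs * candidate_mask           # zero out non-candidate classes
    z = masked.sum(axis=1, keepdims=True)
    z[z == 0] = 1.0                           # guard against empty candidate sets
    return masked / z                         # each pseudo-target row sums to 1

def fatel_expand_candidates(probs, candidate_mask):
    """FATEL-style expansion (hypothetical sketch): before the epoch ends,
    temporarily add the most confident *non-candidate* class to each set,
    covering the unreliable case where the true label was never annotated."""
    expanded = candidate_mask.copy()
    outside = probs * (1 - candidate_mask)    # model confidence outside the set
    extra = outside.argmax(axis=1)            # most likely missing label per row
    expanded[np.arange(len(probs)), extra] = 1.0
    return expanded
```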
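A usage example under the same assumptions, with randomly generated probabilities standing in for real model outputs:

```python
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=4)          # stand-in softmax outputs
mask = (rng.random((4, 5)) < 0.4).astype(float)    # random candidate sets
mask[np.arange(4), probs.argmax(axis=1)] = 1.0     # keep every set non-empty
pseudo_targets = fapt_update_pseudo_targets(probs, mask)  # FAPT step
expanded_sets = fatel_expand_candidates(probs, mask)      # FATEL step
```

The design intuition, as described in the abstract, is that FAPT refines supervision within the annotated candidates, while FATEL hedges against unreliable annotation by letting the true label re-enter the candidate set when it was omitted.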