    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/95487


    Title: Unreliable Partial Label Learning: Novel Dataset Generation Method and Solution Frameworks
    Author: Huang, Yi-Ting (黃怡庭)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Unreliable Partial Label Learning; Noisy Partial Label Learning; Contrastive Learning; Weakly Supervised Learning; Classification; Real-World Label Noise
    Date: 2024-07-16
    Upload time: 2024-10-09 16:53:54 (UTC+8)
    Publisher: National Central University
    Abstract: Large, high-quality datasets are crucial for training Deep Neural Networks (DNNs). However, datasets collected in the real world are often inaccurately or noisily labeled. To address this label uncertainty, researchers have turned their attention to Unreliable Partial Label Learning (UPLL), which reflects real-world conditions better than traditional Partial Label Learning (PLL).
    Because publicly available UPLL datasets are currently lacking, previous research has usually relied on artificially synthesized ones. This paper proposes a novel method for generating UPLL datasets, called Candidate Label Inference Generation (CLIG), which uses a model trained on a complete dataset together with statistics from a self-collected dataset to generate candidate label sets that match real-world labeling tendencies. Experimental results demonstrate that CLIG is more realistic and reasonable than previous generation methods.
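    The abstract does not spell out the CLIG procedure, but the idea of deriving candidate label sets from a model trained on the clean dataset can be illustrated with a minimal sketch. The function name, the probability threshold, and the noise rate below are illustrative assumptions, not the actual CLIG algorithm from the thesis.

        import numpy as np

        def generate_candidate_sets(probs, true_labels, threshold=0.2, noise_rate=0.1, seed=0):
            """probs: (N, C) softmax outputs of a classifier trained on the fully
            labeled dataset; true_labels: (N,) ground-truth class indices.
            Returns one candidate label set (a set of class indices) per sample."""
            rng = np.random.default_rng(seed)
            candidate_sets = []
            for p, y in zip(probs, true_labels):
                # Classes the trained model confuses with the true class become
                # candidates, mimicking realistic annotator ambiguity.
                candidates = set(np.flatnonzero(p >= threshold).tolist())
                candidates.add(int(y))
                # Occasionally drop the true label so the candidate set is
                # "unreliable" -- the defining property of UPLL.
                if rng.random() < noise_rate and len(candidates) > 1:
                    candidates.discard(int(y))
                candidate_sets.append(candidates)
            return candidate_sets

    In this hypothetical setup, ambiguity comes from the model's own confusions rather than uniform random flipping, which is one plausible way to approximate real-world labeling tendencies.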
    Additionally, this paper introduces two UPLL frameworks: Feature Alignment Pseudo-Target Learning (FAPT) and Feature Alignment Temporarily Expanded Labels Set (FATEL). Both frameworks use contrastive learning to optimize feature extraction and supervised learning for classification. FAPT converts candidate label sets into pseudo-targets and updates them at the end of each epoch, while FATEL temporarily adds a potentially true label to each candidate label set before the end of each epoch. Experimental results show that FAPT and FATEL outperform state-of-the-art UPLL methods on multiple image datasets.
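    The two per-epoch label-handling ideas can also be sketched in a hedged way: a FAPT-style pseudo-target built by renormalising model probabilities over the candidate set, and a FATEL-style temporary expansion of the candidate set with the most likely class outside it. The exact update rules in the thesis may differ; the code below only illustrates the mechanism described in the abstract.

        import numpy as np

        def fapt_style_pseudo_targets(probs, candidate_sets):
            # Restrict the model's probabilities to each candidate set and
            # renormalise; the result serves as a soft target in the next epoch.
            targets = np.zeros_like(probs)
            for i, cand in enumerate(candidate_sets):
                idx = np.array(sorted(cand))
                p = probs[i, idx]
                targets[i, idx] = p / max(p.sum(), 1e-12)
            return targets

        def fatel_style_expand(probs, candidate_sets):
            # Temporarily add the most likely class outside each candidate set;
            # the expanded sets are discarded again after the epoch.
            expanded = []
            for i, cand in enumerate(candidate_sets):
                p = probs[i].astype(float).copy()
                p[list(cand)] = -np.inf
                expanded.append(cand | {int(p.argmax())})
            return expanded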
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses