Master's and Doctoral Theses: Detailed Record for Thesis 111522056




Author: Yi-Ting Huang (黃怡庭)    Department: Computer Science and Information Engineering
Thesis Title: Unreliable Partial Label Learning: Novel Dataset Generation Method and Solution Frameworks
Related Theses:
★ Predicting Users' Personal Information and Personality Traits from Web Browsing Logs
★ Predicting Changes in User Browsing Behavior Before Special Holidays via Matrix Factorization-Based Multi-Target Prediction
★ A Study on Dynamic Multi-Model Fusion Analysis
★ Extending Clickstreams: Analyzing User Behaviors Missing from Clickstreams
★ Associated Learning: Decomposing End-to-End Backpropagation Using Autoencoders and Target Propagation
★ A Click Prediction Model Fusing Multi-Model Ranking
★ Analyzing Intentional, Unintentional, and Missing User Behaviors in Web Logs
★ Adjusting Word Embeddings with Synonym and Antonym Information Using a Non-Directional Sequence Encoder Based on Self-Attention
★ Exploring When to Use Deep Learning versus Simple Models for Click-Through Rate Prediction
★ Fault Detection for Air-Quality Sensors: An Anomaly Detection Framework Based on Deep Spatio-Temporal Graph Models
★ An Empirical Study of the Impact of Thesaurus-Adjusted Word Embeddings on Downstream Natural Language Tasks
★ A Semi-Supervised Model Combining Spatio-Temporal Data, Applied to Anomaly Detection of PM2.5 Air-Pollution Sensors
★ Training Neural Networks by Adjusting DropConnect Drop Probabilities According to Weight Gradient Magnitudes
★ Detecting Low-Activity Anomalous Accounts on PTT with Graph Neural Networks
★ Generating Personalized Trend Lines for Individual Users from Few Trend-Line Samples
★ Two Novel Probabilistic Clustering Models Based on Bivariate and Multivariate Beta Distributions
  1. This electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed for personal, non-profit retrieval, reading, and printing for academic research purposes only.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract: Large and high-quality datasets are crucial for training Deep Neural Networks (DNNs). However, datasets collected in the real world are often inaccurately and noisily labeled. To address this label uncertainty, researchers have turned their attention to Unreliable Partial Label Learning (UPLL), which is more realistic than traditional Partial Label Learning (PLL).
Because publicly available UPLL datasets are scarce, previous research has usually relied on artificially synthesized ones. This thesis proposes a novel method for generating UPLL datasets, called Candidate Label Inference Generation (CLIG), which combines a model trained on a fully labeled dataset with statistics from a self-collected, human-annotated dataset to generate candidate label sets that match real-world labeling tendencies. Experimental results demonstrate that CLIG is more realistic and reasonable than previous methods.
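The generation step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the two thresholds (named after the δ1 and δ2 in the thesis's table of contents) and the exact rule for building each candidate set are assumptions here, whereas the thesis derives them from statistics of its self-collected dataset.

```python
import numpy as np

def clig_candidate_sets(probs, true_labels, delta1=0.5, delta2=0.1):
    """Hypothetical sketch of Candidate Label Inference Generation (CLIG).

    probs: (n, c) class probabilities from a model trained on a clean dataset.
    true_labels: ground-truth class index per sample (used only to pick a branch).
    delta1, delta2: assumed confidence thresholds.
    Returns one candidate label set (a Python set of class indices) per sample.
    """
    candidate_sets = []
    for p, y in zip(probs, true_labels):
        if p[y] >= delta1:
            # Model is confident about the true label: keep only highly
            # plausible labels, mimicking a careful annotator.
            cand = {c for c, pc in enumerate(p) if pc >= delta1}
        else:
            # Model is unsure: include every label above the looser
            # threshold. The true label may be missed entirely, which is
            # exactly the "unreliable" case UPLL must handle.
            cand = {c for c, pc in enumerate(p) if pc >= delta2}
        if not cand:  # never emit an empty candidate set
            cand = {int(np.argmax(p))}
        candidate_sets.append(cand)
    return candidate_sets
```

With a confident first sample and an ambiguous second one, the second candidate set can omit the true label, which is what distinguishes UPLL data from ordinary PLL data.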
In addition, this thesis introduces two UPLL frameworks: Feature Alignment Pseudo-Target Learning (FAPT) and Feature Alignment Temporarily Expanded Labels Set (FATEL). Both frameworks use contrastive learning to improve feature extraction and supervised learning for classification. FAPT converts each candidate label set into a pseudo-target and updates it at the end of every epoch, while FATEL temporarily adds one potentially true label to each candidate label set at the end of every epoch. Experimental results show that FAPT and FATEL outperform state-of-the-art UPLL methods on multiple image datasets.
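The per-epoch label handling of the two frameworks can be sketched with NumPy. Both functions are hypothetical illustrations of the mechanisms the abstract names, not the actual implementations (the real frameworks also perform contrastive feature alignment, omitted here): FAPT is assumed to renormalize the model's probabilities over the candidate labels to form pseudo-targets, and FATEL to temporarily add the highest-scoring non-candidate label.

```python
import numpy as np

def fapt_pseudo_targets(probs, candidate_mask):
    """Hypothetical sketch of FAPT's pseudo-target update: restrict the
    model's predicted probabilities to the candidate labels, then
    renormalize so each row sums to 1."""
    masked = probs * candidate_mask
    return masked / masked.sum(axis=1, keepdims=True)

def fatel_expand(probs, candidate_mask):
    """Hypothetical sketch of FATEL's temporary expansion: for each sample,
    add the single non-candidate label the model currently finds most
    likely, so a true label missed by the annotators can be recovered."""
    expanded = candidate_mask.copy()
    outside = np.where(candidate_mask, -np.inf, probs)  # hide candidates
    best_outside = outside.argmax(axis=1)               # top non-candidate
    expanded[np.arange(len(probs)), best_outside] = True
    return expanded
```

In a training loop, both functions would be called once per epoch with `probs` produced by the current classifier, matching the abstract's "at the end of every epoch" schedule.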
Keywords:
★ Unreliable Partial Label Learning
★ Noisy Partial Label Learning
★ Contrastive Learning
★ Weakly Supervised Learning
★ Classification
★ Real-World Label Noise
Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
Symbols and Definitions
1. Introduction
2. Related Work
   2.1 Traditional PLL
   2.2 Deep-Learning-Based PLL
   2.3 UPLL
3. Dataset Collection and Generation
   3.1 Assumptions and Goals of UPLL
   3.2 Survey of Related Datasets and UPLL Candidate Label Set Generation Methods
   3.3 Collecting a Human-Annotated UPLL Dataset
   3.4 Generating UPLL Candidate Label Sets from Models Trained on Standard Datasets
       3.4.1 Determining δ1 and δ2
       3.4.2 Candidate Label Inference Generation (CLIG)
   3.5 Comparison of Candidate Label Set Generation Methods
4. Models and Methods
   4.1 Overall Framework
   4.2 Feature Alignment Pseudo-Target Learning (FAPT)
   4.3 Feature Alignment Temporarily Expanded Labels Set (FATEL)
5. Experimental Results and Analysis
   5.1 Experimental Setup and Implementation Details
       5.1.1 Datasets
       5.1.2 Baselines
       5.1.3 Implementation Details
   5.2 Performance of FAPT, FATEL, and Baselines on Realistic Low-Noise UPLL Datasets
   5.3 Reduced-Data Experiments
   5.4 Performance of FAPT, FATEL, and Baselines on High-Noise UPLL Datasets
   5.5 Hyperparameter Analysis
   5.6 Ablation Study
6. Conclusion
   6.1 Conclusions
   6.2 Future Work
References
Appendix A: Source Code
Advisor: Hung-Hsuan Chen (陳弘軒)    Approval Date: 2024-07-16

For questions about theses, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. Privacy Policy Statement.