NCU Institutional Repository: theses and dissertations, past exam papers, journal articles, and research projects for download. Item 987654321/98367
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98367


    Title: Leveraging Self-Supervised Learning and Supervised Contrastive Learning in Enhancing Multi-Task Multi-Class Classification Problem for CT Scan Medical Images
    Author: Cheng, Mo-Qian (程謀謙)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Medical Imaging; Multi-Class; Multi-Output; Classification; Contrastive Learning; Self-Supervised Learning; Model Robustness
    Date: 2025-07-28
    Upload time: 2025-10-17 12:41:30 (UTC+8)
    Publisher: National Central University
    Abstract: Supervised learning has been shown to achieve remarkable performance when abundant labeled data are available, but in medical image classification it often suffers from overfitting due to the scarcity and inaccuracy of labels, which hinders its adoption in real-world applications.

    To address this problem, this thesis investigates the integration of a multi-task learning scheme with self-supervised learning (SSL) pretraining. The core idea is to leverage SSL pretraining to encourage the feature extractor to capture subtle yet crucial features within medical images, thereby mitigating overfitting and improving performance during fine-tuning on the downstream task. Specifically, we pretrain feature extractors with a variety of SSL and supervised contrastive learning (SCL) methods, then evaluate them via supervised fine-tuning on a downstream multi-class multi-output abdominal trauma detection task.
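In a multi-class multi-output setup like the downstream task described above, one shared image embedding feeds several independent multi-class classifier heads. The following is a minimal NumPy sketch of that head layout; the head names ("liver", "spleen", "kidney"), class counts, and embedding size are illustrative assumptions, not the thesis's actual label schema.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical output layout: each output gets its own multi-class head
# on top of one shared image embedding. Names and class counts here are
# illustrative assumptions only.
HEADS = {"liver": 3, "spleen": 3, "kidney": 3}
EMBED_DIM = 8

# One linear classifier head per output, over the shared embedding.
weights = {name: rng.standard_normal((EMBED_DIM, k)) for name, k in HEADS.items()}

def predict(embedding):
    """Map one shared embedding to a per-output class distribution."""
    return {name: softmax(embedding @ w) for name, w in weights.items()}

probs = predict(rng.standard_normal(EMBED_DIM))
```

Only the heads are task-specific; the shared embedding is what SSL or SCL pretraining shapes before fine-tuning.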

    Our experimental results demonstrate that SSL and SCL pretraining can substantially alleviate the overfitting that commonly occurs in supervised learning, and even yield further improvements across several metrics. Through further analysis comparing experiments in which different feature-extractor components take part in pretraining, we find that the image feature extractor is the main contributor to these gains. Lastly, by swapping the backbone model of the feature extractor for the SCL method, we uncover the potential of SCL to reinforce model robustness, providing insights into breaking the model robustness bottleneck.

    In conclusion, our research suggests that SSL pretraining can improve both the classification performance and the robustness of a model on complex medical image classification tasks.
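The supervised contrastive (SCL) objective referred to above is commonly formulated as the SupCon loss: for each anchor, embeddings sharing its label are pulled together and all others pushed apart. Below is a minimal NumPy sketch of that standard formulation, not the thesis's actual implementation; the temperature value and the toy embeddings are illustrative assumptions.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    features: (N, D) array; labels: (N,) int array.
    Positives for anchor i are all other samples with the same label.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature              # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-comparisons
    # Log-probability of each pair under a softmax over the anchor's row.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    n_pos = pos_mask.sum(axis=1)
    valid = n_pos > 0                        # anchors with at least one positive
    # Mean negative log-probability over each anchor's positives.
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)[valid] / n_pos[valid]
    return per_anchor.mean()

# Toy check: same-class embeddings that cluster together should score a
# lower loss than a batch where positives point in different directions.
labels = np.array([0, 0, 1, 1])
clustered = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0], [0.01, 1.0]])
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.01], [0.01, 1.0]])
```

In pretraining, this loss would be minimized over the feature extractor's outputs before the classifier heads are fine-tuned with supervised learning.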
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File         Description   Size   Format   Views
    index.html                 0Kb    HTML     10      View/Open


    All items in NCUIR are protected by copyright, with all rights reserved.

