

Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98363


    Title: Leveraging 2D VoCo-Based Pretraining to Enhance Multi-Task Multi-Class Classification of Abdominal CT Scan Medical Images
    Author: Chiu, Po-Kai (邱柏愷)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: Self-supervised Learning; Contrastive Learning; Abdominal CT; Medical Imaging
    Date: 2025-07-28
    Date of Upload: 2025-10-17 12:41:06 (UTC+8)
    Publisher: National Central University
    Abstract: In the field of medical image analysis, the performance of deep learning models depends heavily on large-scale, high-quality annotated datasets. However, medical annotation is costly and requires specialized expertise. To reduce reliance on manual labeling, this study proposes a self-supervised contrastive learning framework tailored to 2D medical imaging, adapted from the 3D Volume Contrastive Learning framework (VoCo), and integrates sequence modeling techniques to enhance performance on abdominal trauma classification tasks.
    This study explores the application of the adapted 2D VoCo method to abdominal CT image classification. Through slice-level contrastive pretraining on publicly available abdominal datasets, the model learns semantic structure across slices; the pretrained backbone is then transferred to the RSNA 2023 dataset for downstream multi-organ and single-organ injury classification tasks. The downstream model adopts a CNN-LSTM architecture to capture correlations across slice sequences, and a series of ablation studies validates the effectiveness of the proposed contrastive strategy.
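The slice-level contrastive pretraining mentioned above can be illustrated with a minimal sketch. It assumes a 2D adaptation of VoCo's core idea: the image is tiled into non-overlapping base crops, and a randomly placed crop's position is supervised by its area-overlap ratio with each base crop, used as a soft label for a contextual-position prediction loss. All function names, sizes, and the grid layout here are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def overlap_ratio(crop, base):
    """Fraction of `crop`'s area that falls inside `base`.
    Boxes are (y0, x0, y1, x1) in pixel coordinates."""
    y0 = max(crop[0], base[0]); x0 = max(crop[1], base[1])
    y1 = min(crop[2], base[2]); x1 = min(crop[3], base[3])
    inter = max(0, y1 - y0) * max(0, x1 - x0)
    area = (crop[2] - crop[0]) * (crop[3] - crop[1])
    return inter / area

def voco2d_targets(img_size=96, grid=3, crop_size=32, rng=None):
    """Tile a slice into a grid x grid set of base crops, draw one random
    crop, and return its soft position label: the overlap ratio with each
    base crop, normalised to sum to 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    step = img_size // grid
    bases = [(r * step, c * step, (r + 1) * step, (c + 1) * step)
             for r in range(grid) for c in range(grid)]
    y = int(rng.integers(0, img_size - crop_size))
    x = int(rng.integers(0, img_size - crop_size))
    crop = (y, x, y + crop_size, x + crop_size)
    ratios = np.array([overlap_ratio(crop, b) for b in bases])
    return crop, ratios / ratios.sum()

crop, label = voco2d_targets()
# During pretraining, the model would embed the random crop and the base
# crops, and train its predicted crop-to-base similarities to match this
# soft label (e.g. with a cross-entropy-style loss over the grid).
print(label.round(3))
```

The soft labels are free supervision derived purely from geometry, which is what makes the scheme self-supervised: no human annotation is consumed during pretraining.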
    Experimental results show that the proposed approach achieves promising performance even in the multi-organ classification setting. Under limited-annotation scenarios, the method effectively captures spatial-semantic dependencies and improves classification accuracy. These findings demonstrate the practical feasibility and application potential of the VoCo framework for abdominal CT analysis, and suggest directions for further improvement to enhance model generalizability and utility. Overall, the 2D VoCo method exhibits strong potential and scalability for deep-learning-based medical image analysis.
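The slice-sequence modelling idea behind the CNN-LSTM described in the abstract can be sketched as follows: per-slice feature vectors (stand-ins for CNN backbone outputs) are aggregated by an LSTM into a single scan-level representation that a classification head could consume. All dimensions, parameter shapes, and names are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_over_slices(feats, W, U, b):
    """Run a single-layer LSTM over per-slice CNN features.
    feats: (T, d) array, one d-dim feature vector per CT slice
    W: (4h, d), U: (4h, h), b: (4h,) stacked gate parameters
    Returns the final hidden state (h,), a scan-level summary."""
    hdim = U.shape[1]
    h = np.zeros(hdim)
    c = np.zeros(hdim)
    for x in feats:                      # iterate slice by slice
        z = W @ x + U @ h + b            # all four gates in one product
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Toy usage: 16 slices, 8-dim "CNN" features, 4 hidden units.
rng = np.random.default_rng(1)
T, d, hdim = 16, 8, 4
feats = rng.normal(size=(T, d))          # stand-in for backbone outputs
W = rng.normal(scale=0.1, size=(4 * hdim, d))
U = rng.normal(scale=0.1, size=(4 * hdim, hdim))
b = np.zeros(4 * hdim)
summary = lstm_over_slices(feats, W, U, b)
# `summary` would feed a linear head for multi-organ injury labels.
```

The recurrence lets the classifier use context from neighbouring slices, which a per-slice CNN alone cannot, matching the abstract's motivation for sequence modelling.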
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Theses & Dissertations

    Files in This Item: index.html (0Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

