

    Please use this identifier to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98363


    Title: 改進腹部電腦斷層掃描多任務多類別問題: 使用 2D VoCo 預訓練;Leveraging 2D VoCo-Based Pretraining to Enhance Multi-Task Multi-Class Classification of Abdominal CT Scan Medical Images
    Authors: 邱柏愷;Chiu, Po-Kai
    Contributors: Department of Computer Science and Information Engineering
    Keywords: 自監督學習;對比學習;腹部CT;醫學影像;Self-supervised Learning;Contrastive Learning;Abdominal CT;Medical Imaging
    Date: 2025-07-28
    Issue Date: 2025-10-17 12:41:06 (UTC+8)
    Publisher: National Central University
    Abstract: In the field of medical image analysis, the performance of deep learning models depends heavily on large-scale, high-quality annotated datasets. However, medical annotation often incurs high costs and requires specialized expertise. To reduce reliance on manual labeling, this study proposes a self-supervised contrastive learning framework tailored for 2D medical imaging, adapted from the 3D Volume Contrastive Learning Framework (VoCo), and integrates sequence modeling techniques to enhance performance in abdominal trauma classification tasks.
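    The abstract does not spell out the pretraining objective, so the following is only a minimal sketch of what a VoCo-style contrastive objective for 2D slices could look like: each slice is split into a grid of base crops, a randomly positioned crop is encoded by the same backbone, and its cosine similarity to each base crop is regressed toward their geometric overlap. The `encoder` interface, the grid/crop scheme, the MSE formulation, and all names and shapes are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn.functional as F

def voco2d_loss(encoder, base_crops, rand_crop, overlap_targets):
    """Hypothetical VoCo-style 2D objective (illustrative only).

    base_crops:      (B, K, C, H, W) non-overlapping grid crops of a slice
    rand_crop:       (B, C, H, W)    randomly positioned crop from the same slice
    overlap_targets: (B, K)          fraction of rand_crop covered by each base crop
    encoder:         maps (N, C, H, W) -> (N, D) feature vectors
    """
    B, K = base_crops.shape[:2]
    base_emb = F.normalize(encoder(base_crops.flatten(0, 1)).view(B, K, -1), dim=-1)
    rand_emb = F.normalize(encoder(rand_crop), dim=-1)

    # Cosine similarity between the random crop and every base-grid crop,
    # clipped to [0, 1] so it is comparable to an overlap fraction.
    sim = torch.einsum("bkd,bd->bk", base_emb, rand_emb).clamp(min=0.0)

    # Train the similarities to match the true spatial overlap ratios,
    # which teaches the encoder where a crop sits within the slice.
    return F.mse_loss(sim, overlap_targets)
```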
    This study explores the application of the improved 2D VoCo method to abdominal CT image classification. Slice-level contrastive pretraining on publicly available abdominal datasets lets the model learn semantic structure across slices; the pretrained backbone is then transferred to the RSNA 2023 dataset for downstream multi-organ and single-organ injury classification tasks. The downstream model adopts a CNN-LSTM architecture to capture spatial-temporal correlations across the slice sequence, and a series of ablation studies validates the effectiveness of the proposed contrastive strategy.
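    As a rough illustration of the CNN-LSTM arrangement described above (not the thesis's actual configuration), the sketch below embeds each slice with a pretrained 2D backbone, aggregates the slice sequence with a bidirectional LSTM, and applies a linear head for a single organ's injury classes. The feature size, hidden size, pooling choice, and class count are assumptions.

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """Hypothetical CNN-LSTM slice-sequence classifier (illustrative only)."""

    def __init__(self, backbone, feat_dim=512, hidden_dim=256, num_classes=3):
        super().__init__()
        self.backbone = backbone                      # pretrained 2D encoder, (N, C, H, W) -> (N, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, volume):                        # volume: (B, S, C, H, W) slice sequence
        B, S = volume.shape[:2]
        feats = self.backbone(volume.flatten(0, 1))   # encode every slice independently
        seq_out, _ = self.lstm(feats.view(B, S, -1))  # model inter-slice context
        pooled = seq_out.mean(dim=1)                  # average over the slice axis
        return self.head(pooled)                      # per-class injury logits
```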
    Experimental results show that the proposed approach achieves promising performance even in multi-organ classification settings. Under limited annotation scenarios, the method effectively captures spatial-semantic dependencies and improves classification accuracy. These findings demonstrate the practical feasibility and application potential of the VoCo framework for abdominal CT analysis, and suggest directions for further improvement to enhance model generalizability and utility. Overall, the 2D VoCo method exhibits strong potential and scalability for medical image analysis in combination with deep learning.
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    index.html (HTML, 0 Kb)

