

Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/88151


Title: Automatic Brain Tumor Segmentation Using Deep Learning With Fast Data Density Functional Transform (使用深度學習結合快速資料密度泛函轉換進行自動腦瘤切割)
Author: Yang, Wei-An (楊惟安)
Contributors: Institute of Biomedical Engineering
Keywords: Automatic Brain Tumor Segmentation; Deep Learning; Fast Data Density Functional Transform
Date: 2022-06-30
Upload date: 2022-07-13 18:12:38 (UTC+8)
Publisher: National Central University
Abstract: Early diagnosis of brain tumors plays a vital role in improving treatment options and increasing patient survival rates, so the segmentation of brain tumors in medical image processing is paramount for treatment and the prevention of recurrence. Manually labeling brain tumors for disease diagnosis in the many magnetic resonance (MRI) images generated in clinical routine is an arduous and time-consuming task, which creates a need for automatic brain tumor segmentation tools. In this thesis, we propose a semi-unsupervised preprocessing method, combined with a novel deep learning model, for automatic brain tumor image segmentation based on the topological heterogeneity of images. To solve the problems of active contour-based models while inheriting their compact modeling form and reducing the computational complexity of parameter training, we combine a neural network-based learning mechanism with the data density functional transform (DDFT). First, we mathematically reinforce the computational performance of the DDFT by introducing the fast Fourier transform to replace the original energy estimation of the data system. We then employ a gradient descent-like learning process to adaptively update the energy terms of the DDFT.
Under the resulting fast DDFT (fDDFT) framework, the average computational time for segmenting each 256×256-pixel image in the BraTS2020 dataset is about 0.09 seconds. Beyond this gain in computational performance, the ability to localize unknown tumor regions, the primary purpose of this research, demonstrates the unique capability of the fDDFT framework. Finally, we use the fDDFT as a preprocessing step for a 3D encoder-decoder architecture called dimension fusion U-Net, building a robust deep learning pipeline for automatic brain tumor segmentation that achieves competitive performance on the public BraTS2020 dataset, with Dice scores of 92.21, 87.60, 86.59, and 83.62 for the whole tumor, tumor core, enhancing tumor, and edema, respectively. The flexibility of the fDDFT in extracting features from any region of an image, together with its computational simplicity, suggests the method can be extended further.
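The abstract notes that the fast Fourier transform replaces the original energy estimation of the data system. In DDFT-style methods, such energy terms are convolution-like integrals over a density map, which the convolution theorem lets one evaluate in O(N log N) instead of O(N²) per output point. A minimal sketch of this speedup idea, assuming a softened 1/r interaction kernel and a random density map as illustrative placeholders (neither is the thesis's actual functional):

```python
import numpy as np

# Illustrative 256x256 density map (e.g., normalized image intensities);
# a placeholder, not the thesis's actual data density functional.
n = 256
rng = np.random.default_rng(0)
density = rng.random((n, n))

# Softened 1/r interaction kernel centered on the grid (illustrative).
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = 1.0 / np.sqrt(x**2 + y**2 + 1.0)

# Spatial-domain energy estimation would convolve density with kernel
# directly; via the convolution theorem this becomes a pointwise
# product in frequency space, computed in O(N log N).
potential = np.real(
    np.fft.ifft2(np.fft.fft2(density) * np.fft.fft2(np.fft.ifftshift(kernel)))
)
energy = 0.5 * np.sum(density * potential)  # scalar energy estimate
```

The same frequency-domain trick generalizes to any translation-invariant kernel, which is what makes the FFT substitution a drop-in acceleration for convolution-type energy terms.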
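The Dice scores reported above measure voxel-wise overlap between a predicted segmentation mask and the ground-truth mask, as 2|A∩B| / (|A|+|B|). A minimal sketch of the metric (the function name and the toy masks are illustrative, not from the thesis):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks, as a percentage."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 100.0 * 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: a 2x2 predicted region vs. a 2x3 ground-truth region.
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1    # 4 pixels
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:4] = 1  # 6 pixels
print(round(dice_score(pred, truth), 1))  # 2*4 / (4+6) -> 80.0
```

In multi-class brain tumor benchmarks such as BraTS, this score is computed separately per region (whole tumor, tumor core, enhancing tumor), which is why several Dice values are reported.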
Appears in Collections: [Institute of Biomedical Engineering] Master's and Doctoral Theses

Files in this item:

File: index.html | Size: 0Kb | Format: HTML | Views: 91


All items in NCUIR are protected by copyright, with all rights reserved.

