Master's/Doctoral Thesis 111827008 — Detailed Record




Name: Li-Wei Hao (李暐灝)    Graduate institute: Graduate Institute of Biomedical Engineering
Thesis title: Research on the Application of Geometric Feature Enhancement for Brain Tumor Image Recognition and Segmentation
(Chinese title: 以幾何特徵強化方法用於腦腫瘤影像辨識與分割之研究)
Related theses
★ An Unsupervised Learning Method for Human Posture Recognition Based on Density Functional Theory
★ A Dynamic Exposure Method for Tongue-Print Analysis
★ Development of a Networked Medical Data Acquisition Embedded System Integrating the Modbus and WebSocket Protocols
★ Comparing the Performance of the U-Net Neural Network and the Data Density Functional Method for Magnetic Resonance Image Segmentation
★ Dynamic Tongue Image Detection and Segmentation in a Standard Environment Using the YOLO Architecture
★ Metal Surface Defect Recognition Using YOLO
★ Automatic Brain Tumor Segmentation Using Deep Learning Combined with the Fast Data Density Functional Transform
★ Simulating the Suppression of the COVID-19 Epidemic Using Reinforcement Learning
★ A Machine Learning Model for Visualizing Human Upper-Body Motion Features by Fusing Image and Accelerometer Signals
★ Construction of a Micro-Experimental Platform with an Artificial Magnetic Field for Cell Culture
★ Verification of a Novel MEMS Microphone in a Standard CMOS Process, Wet-Etching Process Development, and Mass-Production Process Research
★ Biological Effects of Static Magnetic Fields on Cancer Cells
★ A Joint-Angle Monitoring Device Applied to Daily Knee Activities
★ Using Reinforcement Learning to Support Outbreak Management and Spatiotemporal Analysis of COVID-19 Epidemiology in Japan
  1. The electronic full text of this thesis is approved for immediate open access.
  2. Electronic full texts that have reached their open-access date are licensed only for personal, non-profit retrieval, reading, and printing by users for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): According to clinical statistics, the average survival period for malignant intracranial brain tumors is only about fifteen months, and the recurrence rate within two years is nearly 100%, so early assessment, diagnosis, and treatment are essential. At the same time, the success of automated brain tumor recognition and segmentation in medical imaging plays an important role in supporting physicians' diagnoses and in planning patients' treatment. Because manually annotating tumor locations in magnetic resonance images and arranging surgical pathways and plans costs physicians and medical specialists a great deal of time, this study trains geometric deep learning models for automatic brain tumor image recognition, segmentation, and three-dimensional tumor shape reconstruction, aiming to raise the accuracy of tumor localization and contouring while reducing physicians' time and physical burden. We use the BraTS 2020 dataset, whose images are first processed with the fast Data Density Functional Transform to enhance tumor features, and embed Squeeze-and-Excitation modules, which strengthen three-dimensional brain tumor features, into the encoder-decoder structure of D-Unet (Dimension Fusion U-Net) to build and train a series of geometric deep learning models. Besides D-Unet, we also compare other widely used contemporary brain tumor segmentation models, such as nnU-Net and 3D U-Net, in terms of computational complexity and Dice segmentation scores, and then improve the model with the highest Dice score to obtain the best segmentation results. We find that datasets processed by the fast Data Density Functional Transform substantially shorten deep learning models' training and inference time (by roughly 50% or more) while also improving their segmentation performance.
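The Squeeze-and-Excitation channel recalibration mentioned in the abstract can be sketched in a few lines. The channel count, reduction ratio, and externally supplied weights below are illustrative assumptions, not the configuration used in the thesis:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-Excitation channel recalibration (illustrative sketch).

    feature_map: (C, H, W) activations; w1: (C // r, C) and w2: (C, C // r)
    are bottleneck weights for an assumed reduction ratio r.
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = feature_map.mean(axis=(1, 2))                 # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid yields per-channel gates.
    s = np.maximum(w1 @ z, 0.0)                       # shape (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))            # shape (C,), values in (0, 1)
    # Scale: reweight each channel map by its learned gate.
    return feature_map * gate[:, None, None]
```

In the thesis's pipeline such gates would sit inside the D-Unet encoder-decoder blocks and extend to volumetric (C, D, H, W) features for the 3D case; a 2D map keeps this sketch minimal.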
Abstract (English): Clinical statistics reveal an average survival period of approximately fifteen months for malignant intracranial tumors, with an almost 100% recurrence rate within two years. Early assessment, diagnosis, and treatment are therefore crucial in addressing this challenging prognosis, and the success of automated brain tumor recognition and segmentation tasks in medical imaging plays a vital role in assisting physicians with diagnosis and guiding patient treatment strategies. Manual annotation of tumor locations in magnetic resonance imaging (MRI) scans and planning of surgical pathways by medical professionals are time-consuming. To improve the accuracy of tumor localization and contour marking while alleviating the burden on healthcare providers, this study employs geometric deep learning models for automatic brain tumor image recognition, segmentation, and three-dimensional tumor volume reconstruction. In our research, we use the BraTS 2020 dataset, which first undergoes the fast Data Density Functional Transform to enhance tumor features, and embed Squeeze-and-Excitation modules, which strengthen three-dimensional brain tumor features, into the encoder-decoder structure of D-Unet (Dimension Fusion U-Net) to build and train a series of geometric deep learning models. In addition to D-Unet, we also compare other popular contemporary brain tumor segmentation models, such as nnU-Net and 3D U-Net, in terms of computational complexity and Dice segmentation scores, and then improve the model with the highest Dice score to obtain the best segmentation results. Our research finds that datasets processed by the fast Data Density Functional Transform can significantly shorten the training and inference time of deep learning models (by about 50% or more) while also improving their segmentation performance.
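The Dice score used for the model comparison above is a standard overlap measure between a predicted mask and a ground-truth mask. A minimal sketch follows; the smoothing constant `eps` is an assumption added to avoid division by zero on empty masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    # Count voxels where prediction and ground truth agree on "tumor".
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means the predicted and ground-truth tumor masks coincide exactly; a score near 0.0 means they barely overlap.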
Keywords (Chinese): ★ Fast Data Density Functional Transform (快速資料密度泛函轉換)    Keywords (English): ★ Fast Data Density Functional Transform
Table of Contents
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
1. Introduction
1-1 Brain Tumors
1-1-1 Definition and Classification of Brain Tumors
1-1-2 Incidence and Mortality of Brain Tumors
1-1-3 Symptoms and Diagnosis of Brain Tumors
1-1-4 Treatment of Brain Tumors
1-2 Introduction and Applications of Biomedical Image Recognition
2. Literature Review
2-1 Convolutional Neural Networks
2-2 Fully Convolutional Networks
2-3 The nnU-Net Architecture
2-4 The 3D U-Net Architecture
3. Research Content and Methods
3-1 Dataset
3-2 Data Preprocessing
3-2-1 Fast Data Density Functional Transform
3-2-2 Fermi Normalization
3-2-3 Architecture of the Global Convolution Automatic Operation Model
3-3 Deep Learning Network Architectures
3-3-1 Experiment 1: Brain Tumor Segmentation with D-UNet
3-3-2 Experiment 2: Brain Tumor Segmentation with nnU-Net
3-3-3 Experiment 3: Brain Tumor Segmentation with 3D U-Net
4. Results and Discussion
5. Conclusion
References
References
[1] Taipei Veterans General Hospital, Brain Tumors Treatment Guidelines. [Online]. Available:
https://wd.vghtpe.gov.tw/hemaonco/files/Guide_BrainCA.pdf
[2] Cancer Registry Annual Report, 2021 Taiwan, published Dec. 2023.
[3] W. Wang, C. Chen, M. Ding, H. Yu, S. Zha, and J. Li, “TransBTS: Multimodal
brain tumor segmentation using transformer,” in Proc. Int. Conf. Med. Image Comput.
Comput.-Assist. Interv., 2021, pp. 109–119.
[4] F. Isensee, P. F. Jäger, P. M. Full, P. Vollmuth, and K. H. Maier-Hein, “nnU-Net
for brain tumor segmentation,” in Proc. Int. MICCAI Brainlesion Workshop, 2021, pp.
118–132.
[5] Z. Zhang, Q. Liu, and Y. Wang, “Road extraction by deep residual u-net,” IEEE
Geosci. Remote Sens. Lett., vol. 15, no. 5, pp. 749–753, May 2018.
[6] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: A
nested U-Net architecture for medical image segmentation,” in Proc. Deep Learn.
Med. Image Anal. Multimodal Learn. Clin. Decis. Support, 2018, pp. 3–11.
[7] R. McKinley, M. Rebsamen, R. Meier, and R. Wiest, “Triplanar ensemble of 3D-
to-2D CNNs with label-uncertainty for brain tumor segmentation,” in Proc. Int.
MICCAI Brainlesion Workshop, 2020, pp. 379–387.
[8] A. Vaswani et al., “Attention is all you need,” in Proc. 31st Neural Inf. Process.
Syst., 2017, pp. 5998–6008.
[9] C. Liu et al., “Auto-deeplab: Hierarchical neural architecture search for semantic
image segmentation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit.,
2019, pp. 82–92.
[10] C. Szegedy et al., “Going deeper with convolutions,” in Proc. IEEE/CVF Conf.
Comput. Vis. Pattern Recognit., 2015, pp. 1–9.
[11] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in
Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 7794–7803.
[12] J. Dai et al., “Deformable convolutional networks,” in Proc. IEEE Int. Conf.
Comput. Vis., 2017, pp. 764–773.
[13] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab:
Semantic image segmentation with deep convolutional nets, atrous convolution, and
fully connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 4, pp.
834–848, Apr. 2018.
[14] L. Chi, B. Jiang, and Y. Mu, “Fast Fourier convolution,” in Proc. 34th Int. Conf.
Neural Inf. Process. Syst., 2020, pp. 4479–4488.
[15] L. Chi, G. Tian, Y. Mu, L. Xie, and Q. Tian, “Fast non-local neural networks
with spectral residual learning,” in Proc. 27th Assoc. Comput. Machinery Multimedia,
2019, pp. 2142–2151.
[16] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly
learning to align and translate,” in Proc. 3rd Int. Conf. Learn. Representations, 2015.
[17] A. Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image
recognition at scale,” in Proc. Int. Conf. Learn. Representations, 2020.
[18] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for
semantic segmentation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit.,
2015, pp. 3431–3440.
[19] J. Chen et al., “Transunet: Transformers make strong encoders for medical image
segmentation,” 2021, arXiv:2102.04306.
[20] M. Dobko, D.-I. Kolinko, O. Viniavskyi, and Y. Yelisieiev, “Combining CNNs
With transformer for multimodal 3D MRI brain tumor segmentation with self-
supervised pretraining,” 2021, arXiv:2110.07919.
[21] N. Watters et al., “Visual interaction networks: Learning a physics simulator
from video,” in Proc. 31st Int. Neural Inf. Process. Syst., 2017, pp. 4542–4550.
[22] M. K. Abd-Ellah, A. I. Awad, A. A. M. Khalaf, and H. F. A. Hamed, “Two-
phase multi-model automatic brain tumour diagnosis system from magnetic resonance
images using convolutional neural networks,” J. Image Video Proc., vol. 97, pp. 1–10,
Sep. 2018.
[23] M. K. Abd-Ellah, A. I. Awad, A. A. M. Khalaf, and H. F. A. Hamed, “A review
on brain tumor diagnosis from MRI images: Practical implications, key achievements,
and lessons learned,” Magn. Reson. Imag., vol. 61, pp. 300–318, Sep. 2019.
[24] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst,
“Geometric deep learning: Going beyond euclidean data,” IEEE Signal Process.
Mag., vol. 34, no. 4, pp. 18–42, Jul. 2017.
[25] K. Gopinath, C. Desrosiers, and H. Lombaert, “Graph domain adaptation for
alignment-invariant brain surface segmentation,” in Proc. Uncertainty Safe Utilization
Mach. Learn. Med. Imag., Graphs Biomed. Image Anal., 2020, pp. 152–163.
[26] J. Liu et al., “Identification of early mild cognitive impairment using multimodal
data and graph convolutional networks,” BMC Bioinf., vol. 21, no. 6, pp. 1–12, 2020.
[27] S.-J. Huang, C.-C. Chen, Y. Kao, and H. H.-S. Lu, “Feature-aware unsupervised
lesion segmentation for brain tumor images using fast data density functional
transform,” Sci. Rep., vol. 13, no. 1, Aug. 2023, Art. no. 13582.
[28] C.-C. Chen, H.-H. Juan, M.-Y. Tsai, and H. H.-S. Lu, “Unsupervised learning
and pattern recognition of biological data structures with density functional theory
and machine learning,” Sci. Rep., vol. 8, no. 1, Jan. 2018, Art. no. 557.
[29] C.-C. Chen, M.-Y. Tsai, M.-Z. Kao, and H. H.-S. Lu, “Medical image
segmentation with adjustable computational complexity using data density
functionals,” Appl. Sci.-Basel, vol. 9, no. 8, Apr. 2019, Art. no. 1718.
[30] Z.-J. Su, T.-C. Chang, Y.-L. Tai, S.-J. Chang, and C.-C. Chen, “Attention U-net
with dimension-hybridized fast data density functional theory for automatic brain
tumor image segmentation,” in Proc. Int. MICCAI Brainlesion Workshop, 2021, pp.
81–92.
[31] Y.-L. Tai, S.-J. Huang, C.-C. Chen, and H. H.-S. Lu, “Computational complexity
reduction of neural networks of brain tumor image segmentation by introducing
fermi–Dirac correction functions,” Entropy, vol. 23, no. 2, Feb. 2021, Art. no. 223.
[32] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for
semantic segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit.,
2015, pp. 3431–3440.
[33] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for
biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput.
Comput.-Assist. Interv., 2015, pp. 234–241.
[34] F. Isensee, P. F. Jäger, P. M. Full, P. Vollmuth, and K. H. Maier-Hein, “nnU-Net
for brain tumor segmentation,” in Proc. Int. MICCAI Brainlesion Workshop, 2021, pp.
118–132.
[35] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-
Net: Learning dense volumetric segmentation from sparse annotation,” in Proc. Int.
Conf. Med. Image Comput. Comput.-Assist. Interv., 2016, pp. 424–432.
[36] B. H. Menze et al., “The multimodal brain tumor image segmentation benchmark
(BRATS),” IEEE Trans. Med. Imag., vol. 34, no. 10, pp. 1993–2024, Oct. 2015.
[37] S. Bakas et al., “Advancing the cancer genome atlas glioma MRI collections with
expert segmentation labels and radiomic features,” Sci. Data, vol. 4, Sep. 2017, Art.
no. 170117.
[38] S. Bakas et al., “Identifying the best machine learning algorithms for brain tumor
segmentation, progression assessment, and overall survival prediction in the BRATS
challenge,” 2018, arXiv:1811.02629.
[39] S. Bakas et al., “Segmentation labels and radiomic features for the preoperative
scans of the TCGA-GBM collection,” The Cancer Imaging Archive, 2017. [Online].
Available:
https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=24282666
[40] S. Bakas et al., “Segmentation labels and radiomic features for the pre-operative
scans of the TCGA-LGG collection,” The Cancer Imaging Archive, 2017. [Online].
Available:
https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=24282668
[41] U. Baid et al., “The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain
tumor segmentation and radiogenomic classification,” 2021, arXiv:2107.02314.
[42] J. Phys. Chem., vol. 100, no. 31, pp. 12974–12980, Aug. 1996, doi:
10.1021/jp960669l.
[43] Y. Zhou, W. Huang, P. Dong, Y. Xia, and S. Wang, “D-UNet: A dimension-
fusion U shape network for chronic stroke lesion segmentation,” IEEE/ACM Trans.
Comput. Biol. Bioinf., vol. 18, no. 3, pp. 940–950, May/Jun. 2021.
[44] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, “Pruning convolutional
neural networks for resource efficient inference,” in Proc. Int. Conf. Learn.
Representations, 2017, pp. 1–17.
[45] F. Isensee, P. F. Jäger, P. M. Full, P. Vollmuth, and K. H. Maier-Hein, “nnU-Net
for brain tumor segmentation,” in Proc. Int. MICCAI Brainlesion Workshop, 2021, pp.
118–132.
Advisor: Chien-Chang Chen (陳健章)    Date approved: 2024-08-16

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. - Privacy Policy Statement