Master's/Doctoral Thesis 110521158 Detailed Record




Author: Zhi-Huan Jiang (姜智桓)    Department: Electrical Engineering
Thesis Title: Automated Cerebral Cavernous Malformation Segmentation and Quantification Using 3D Multi-scale Convolutional Neural Networks
(Chinese title: 基於三維多尺度卷積神經網路自動分割與量化腦海綿狀血管瘤)
Related Theses
★ Prototype of an Electronic Gene Sequence Detection Chip
★ A Feasibility Study of an Eye-Movement Symbol Expression System
★ Detection of Glycated Hemoglobin by AC Impedance Using Screen-Printed Carbon Electrodes
★ A Feasibility Study of an Electronic Gene Sequence Detection Chip
★ A Computerized Lung Sound Acquisition System
★ Eye-Writing Keyboard and Eye-Writing Mouse
★ An Eye-Writing Telephone Control System
★ A Feasibility Study of an Asthma Lung Sound Monitoring System
★ A Feasibility Study of a Lung Sound Auscultation System
★ A Feasibility Study of a Wearable Toe Flexion Angle Sensing Device
★ A Feasibility Study of a Zhuyin (Bopomofo) Eye-Writing System
★ A Feasibility Study of an English Alphabet Eye-Writing System
★ Prototype of a Digital Stethoscope
★ An ECG Signal Compression Method Based on the Rate of Angle Change
★ A Study of Electronic Gene Microarray Chips and Their Application Circuits
★ A Clinical Study of an Electronic Auscultation System for Left-Right Lung Comparison
Full Text: available for browsing in the system after 2025-08-01
Abstract (Chinese) Cerebral cavernous malformations (CCM) are vascular lesions of the brain composed of benign, abnormal blood vessels that dilate into a cluster at a given location in the brain. On T2-weighted images the lesion shows a hypointense (dark) rim, and its body may appear multicystic with a popcorn-like morphology as a result of recurrent hemorrhage. The diagnosis of CCM currently relies mainly on physicians' visual interpretation and manual delineation; however, visual interpretation is easily affected by the surrounding environment and visual fatigue, and manual delineation is time-consuming and labor-intensive, so an objective tool to improve diagnostic accuracy and efficiency is needed. This thesis proposes a deep learning approach that automatically segments and quantifies CCM on T2-weighted images. First, a Mask Region-based Convolutional Neural Network (Mask R-CNN) is used to extract the brain parenchyma from the T2-weighted images, removing the skull, scalp, and background noise so that CCM segmentation can be performed efficiently within the brain region. The images then undergo preprocessing, including intensity normalization, voxel-size resampling, and data augmentation. Finally, DeepMedic, a multi-scale 3D convolutional neural network, is used to segment and quantify CCM within the brain region. The data used in this study are 192 T2-weighted image volumes from Taipei Veterans General Hospital, randomly divided into a training set (3/5), a validation set (1/5), and a test set (1/5). On the test set, the trained model for automated CCM segmentation achieved an average Dice coefficient of 0.736, precision of 0.807, and recall of 0.729. These results demonstrate the effectiveness of the proposed deep learning approach for automated CCM segmentation, and the developed system provides an objective tool to improve the accuracy and efficiency of CCM diagnosis.
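As an illustration of the preprocessing steps named in the abstract (voxel-size resampling and intensity normalization), a minimal Python sketch follows; the 1 mm isotropic target spacing, the use of scipy.ndimage.zoom for resampling, and z-score normalization restricted to brain voxels are illustrative assumptions rather than details taken from the thesis.

import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(volume, spacing, target=(1.0, 1.0, 1.0)):
    # Resample a 3D volume from its original voxel spacing (mm) to a target spacing.
    factors = [s / t for s, t in zip(spacing, target)]
    return zoom(volume, factors, order=1)        # linear interpolation

def normalize_intensity(volume, brain_mask):
    # Z-score normalization using statistics computed from brain voxels only.
    brain_voxels = volume[brain_mask > 0]
    return (volume - brain_voxels.mean()) / (brain_voxels.std() + 1e-8)

# Example usage with a synthetic T2W-like volume (slices, rows, cols).
vol = np.random.rand(24, 256, 256).astype(np.float32)
vol_iso = resample_to_spacing(vol, spacing=(5.0, 0.5, 0.5))   # -> roughly 1 mm isotropic grid
mask = np.ones_like(vol_iso, dtype=np.uint8)                  # stand-in brain mask
vol_norm = normalize_intensity(vol_iso, mask)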
Abstract (English) Cerebral cavernous malformations (CCM) are vascular abnormalities in the brain characterized by benign clusters of abnormal blood vessels. Magnetic resonance imaging (MRI) is a diagnostic tool used by physicians to detect and assess the size of CCM. In T2-weighted (T2W) images, the lesion typically shows a hypointense (dark) rim, and it may appear as a multicystic structure with a popcorn-like morphology, possibly due to recurrent hemorrhage. Currently, the diagnosis of CCM relies heavily on visual interpretation and manual delineation by physicians. However, these methods are subjective, prone to environmental factors and visual fatigue, and time-consuming. Therefore, there is a need for an objective tool to improve the accuracy and efficiency of diagnosis. To address these challenges, we propose a deep learning-based approach for automated segmentation and quantification of CCM on T2W images. First, a Mask Region-based Convolutional Neural Network (Mask R-CNN) model is employed to extract the brain region from the T2W images, removing the skull, scalp, and background noise so that segmentation can be performed efficiently within the brain region. The images are then subjected to preprocessing steps including intensity normalization, voxel-size resampling, and data augmentation. Finally, DeepMedic, a multi-scale 3D convolutional neural network (CNN), is used to perform CCM segmentation and quantification within the extracted brain region. The dataset used in this study consists of 192 T2W image volumes from Taipei Veterans General Hospital, randomly divided into training (3/5), validation (1/5), and testing (1/5) sets. The trained model for CCM segmentation achieved the following evaluation metrics on the testing set: average Dice coefficient of 0.736, precision of 0.807, and recall of 0.729. The results demonstrate the effectiveness of the proposed deep learning approach in automated CCM segmentation. The developed system provides an objective tool to improve the accuracy and efficiency of CCM diagnosis.
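The evaluation metrics reported in the abstract (Dice coefficient, precision, and recall) follow their standard voxel-level definitions; the sketch below, assuming binary prediction and ground-truth masks stored as NumPy arrays, shows how they can be computed. The function name and the small epsilon guard are illustrative choices, not taken from the thesis.

import numpy as np

def segmentation_metrics(pred, gt):
    # Voxel-level Dice, precision, and recall for binary masks (True/1 = CCM voxel).
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true positives
    fp = np.logical_and(pred, ~gt).sum()         # false positives
    fn = np.logical_and(~pred, gt).sum()         # false negatives
    eps = 1e-8                                   # guard against empty masks
    dice = 2.0 * tp / (2.0 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall

# Example usage with random binary volumes of the same shape.
pred = np.random.rand(24, 128, 128) > 0.99
gt = np.random.rand(24, 128, 128) > 0.99
print(segmentation_metrics(pred, gt))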
Keywords (Chinese) ★ Cerebral cavernous malformation
★ Magnetic resonance imaging
★ Deep learning
★ Automatic segmentation
★ 3D convolutional neural network
Keywords (English) ★ Cerebral cavernous malformation
★ Magnetic resonance imaging
★ Deep Learning
★ Segmentation
★ 3D convolutional neural network
Table of Contents  摘要 (Abstract in Chinese) i
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES x
CHAPTER 1 Introduction 1
1.1 Cerebral Cavernous Malformation 1
1.2 Clinical Needs 1
1.3 Automatic Segmentation of CCM 2
CHAPTER 2 Literature review 3
2.1 Related Literature on CCM Segmentation 3
2.2 Literature on Brain Tumor Segmentation 3
2.3 Literature on Brain Extraction 7
CHAPTER 3 Materials and Methods 9
3.1 T2 Weighted Image 10
3.2 Dataset 10
3.3 MRI Protocol 12
3.4 K-fold Cross-Validation 12
3.5 Brain Extraction 13
3.5.1 Gold Standard 14
3.5.2 Mask Region based Convolution Neural Networks (Mask RCNN) 15
3.6 Pre-processing 16
3.6.1 Voxel size Resampling 17
3.6.2 Intensity Normalization 17
3.6.3 Data Augmentation 18
3.7 CCM Segmentation 18
3.7.1 Deepmedic 18
3.7.2 Hyperparameters 19
3.8 Performance Evaluation 20
CHAPTER 4 Experimental Results 23
4.1 Brain Extraction 23
4.2 CCM Segmentation 25
4.2.1 Voxel-level 26
4.2.2 Number-level 29
4.3 Graphical User Interface 31
CHAPTER 5 Discussion 32
5.1 CCM Segmentation and Quantification 32
5.2 Special Cases in Data 32
5.2.1 Conservative Treatment of CCM 33
5.2.2 Selected Treatment of CCM 34
5.2.3 CCM with Hemorrhage 35
5.2.4 Expanded Treatment of CCM 36
5.2.5 Revised Ground Truth with T1-Weighted Contrast-Enhanced (T1C) 38
5.3 T2W vs. T2W + T1C 39
5.3.1 Preparation of T1C 39
5.3.2 CCM Segmentation Comparison 41
5.3.3 Model Selection for CCM Segmentation 44
5.4 Result Comparison with Literature 46
5.5 Clinical Application for Seizure Incidence Prediction 48
CHAPTER 6 Conclusion and Future Work 50
6.1 Conclusion 50
6.2 Future Work 50
REFERENCES 51
APPENDIX 55
A. Environment 55
B. Interface & Button Introduction 56
a. Main Window 56
b. Load 56
c. RUN 57
d. EXPORT 57
e. SNAPSHOT 58
C. Information Bar 59
a. Patients Information 59
1. ID 59
2. Name 59
3. Gender 60
4. DOB 60
5. GK Date 60
b. Diagnosis Information 60
c. Results Information 61
D. Main Display Information 62
a. T2W Display 62
b. CCM Segmentation Display 63
E. Mouse Function Introduction 63
a. Slider 63
F. Operation Steps 64
a. LOAD Step 64
b. LOAD Step done 65
c. RUN Step 65
d. RUN Step done 66
e. EXPORT Step 67
f. EXPORT Step done 68
g. SNAPSHOT Step 69
h. SNAPSHOT Step done 70
Advisor: Jang-Zern Tsai (蔡章仁)    Date of Approval: 2023-07-28