Master's/Doctoral Thesis 111522081 — Detailed Record




Author: Ming-Yi Hsiao (蕭名誼)    Department: Computer Science and Information Engineering
Title: The Development of Deep Learning-based Pancreas Segmentation Methods
(基於深度學習之胰臟分割方法)
  1. Access rights: this electronic thesis is approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for purposes of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) This study investigates deep learning-based pancreas segmentation methods. With the continuous advance of medical imaging technology, pancreas segmentation has become increasingly important in medical diagnosis. Because the pancreas lies deep in the abdominal cavity and its texture features are complex and highly variable, automatic segmentation is a major challenge.
This study proposes a 3D pancreas segmentation method that combines several image pre-processing techniques, deep learning model optimization, and post-processing techniques. The method centers on improvements to the 3D U-Net architecture, adding a Convolutional Block Attention Module (CBAM), deep supervision, and pyramid-like pooling. In addition, contrast-limited adaptive histogram equalization (CLAHE) is used to enhance image contrast, a coarse segmentation network performs preliminary localization, and image smoothing and noise filtering serve as boundary-correction post-processing to improve pancreas segmentation accuracy.
Experimental results show that, under the same pipeline and with cross-validation, the proposed method achieves higher segmentation accuracy than other CNN-based 3D models on the NIH TCIA Pancreas-CT dataset.
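The abstract above reports comparative segmentation accuracy; for pancreas segmentation on this dataset, the metric conventionally reported is the Dice similarity coefficient (DSC). The record does not name the metric explicitly, so this minimal sketch reflects an assumption about the evaluation:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks.

    `pred` and `truth` are equal-length flat sequences of 0/1 voxel
    labels.  DSC = 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    assert len(pred) == len(truth)
    intersection = sum(p & t for p, t in zip(pred, truth))
    size_sum = sum(pred) + sum(truth)
    if size_sum == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / size_sum
```

For example, `dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])` yields 0.5: one shared foreground voxel against two foreground voxels in each mask.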
Abstract (English) In this study, we investigated a deep learning-based pancreas segmentation method. With the continuous progress of medical imaging technology, pancreas segmentation has become increasingly important in medical diagnosis. Because the pancreas lies deep in the abdominal cavity and its texture features are complex and highly variable, automatic segmentation remains a major challenge.

We propose a 3D pancreas segmentation method that combines several image pre-processing techniques, deep learning model optimization, and post-processing techniques. The method centers on improvements to the 3D U-Net architecture, adding a Convolutional Block Attention Module (CBAM), deep supervision, and pyramid-like pooling. In addition, we enhance image contrast with Contrast Limited Adaptive Histogram Equalization (CLAHE), use a coarse segmentation network for preliminary localization, and apply image smoothing and noise filtering as boundary-correction post-processing to improve the accuracy of pancreas segmentation.

The experimental results show that, under the same pipeline and with K-fold cross-validation, the proposed method achieves higher segmentation accuracy than other CNN-based 3D models on the NIH TCIA Pancreas-CT dataset.
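The CLAHE pre-processing step mentioned in the abstract limits contrast amplification by clipping the intensity histogram before equalization. The following sketch shows the core idea on a flat list of pixel intensities; it is a simplified global variant (real CLAHE, e.g. OpenCV's `createCLAHE`, equalizes per tile and blends tiles with bilinear interpolation):

```python
def contrast_limited_equalize(pixels, clip_limit=40, levels=256):
    """Histogram equalization with a clipped histogram (CLAHE's core).

    `pixels` is a flat list of integer intensities in [0, levels).
    Bins above `clip_limit` are clipped and the excess counts are
    redistributed uniformly, which caps how steep the mapping can get.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip each bin and pool the excess counts.
    excess = 0
    for i in range(levels):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly across all bins.
    bonus = excess // levels
    for i in range(levels):
        hist[i] += bonus
    # Map intensities through the normalized cumulative histogram.
    total = sum(hist)
    cdf, running = [0] * levels, 0
    for i in range(levels):
        running += hist[i]
        cdf[i] = round((levels - 1) * running / total)
    return [cdf[p] for p in pixels]
```

With a low `clip_limit`, a large homogeneous region (such as background air in a CT slice) no longer dominates the cumulative histogram, so noise there is amplified far less than in plain histogram equalization.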
Keywords (Chinese) ★ Pancreas (胰臟)
★ Deep Learning (深度學習)
★ Image Processing (影像處理)
★ Medical Imaging (醫學影像)
★ Image Segmentation (影像分割)
★ Computer Vision (電腦視覺)
Keywords (English) ★ Pancreas
★ Deep Learning
★ Image Processing
★ Medical Image Processing
★ Image Segmentation
★ Computer Vision
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Contents
1. Introduction
 1.1 Motivation
 1.2 Objectives
 1.3 Thesis Organization
2. Background and Literature Review
 2.1 Background
  2.1.1 Computed Tomography
  2.1.2 The DICOM Data Format
  2.1.3 Anatomy of the Pancreas
 2.2 Literature Review
  2.2.1 2D Medical Image Segmentation
  2.2.2 3D Medical Image Segmentation
  2.2.3 Pancreas-CT Image Segmentation
  2.2.4 Studies on the NIH TCIA Pancreas-CT Dataset
3. Methods
 3.1 System Architecture
 3.2 Data Pre-processing
  3.2.1 Resampling
  3.2.2 Target Localization
  3.2.3 HU-Value Windowing
  3.2.4 CLAHE for Image Enhancement
 3.3 Model Optimization and Combination
  3.3.1 3D U-Net
  3.3.2 Model Combination with CBAM Blocks
  3.3.3 Pyramid-like Pooling
 3.4 Post-processing of Predictions
  3.4.1 Boundary Smoothing
  3.4.2 Conditional Random Field (CRF) Post-processing
  3.4.3 Noise Removal
4. Experimental Design and Results
 4.1 Overview
  4.1.1 Dataset
  4.1.2 Evaluation Methods
  4.1.3 Baselines and Model Training
 4.2 Effect of Resampling and Localization on Segmentation
  4.2.1 Image Compression and Localization Methods
  4.2.2 Bounding-Box IoU and Localization Performance
  4.2.3 Localization Results and Analysis
 4.3 Effect of CLAHE on Segmentation
 4.4 Comparison of Baseline Models
  4.4.1 Baseline Model Comparison
  4.4.2 Ablation Study
 4.5 Effect of Post-processing on Segmentation
  4.5.1 Comparison of Post-processed Predictions
  4.5.2 Discussion of Post-processing for Pancreas Segmentation
 4.6 Comparison with Related Work
  4.6.1 Model Comparison under the Same Pipeline
  4.6.2 Discussion of Data Splitting Methods
5. Conclusion
 5.1 Conclusions
 5.2 Future Work
References
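The bibliography cites Galler and Fisher's equivalence (union-find) algorithm [45], which suggests the noise-removal step (Section 3.4.3) labels connected components of the predicted mask and discards small spurious ones. Below is a minimal 2D sketch of that idea, keeping only the largest 4-connected foreground component; the thesis works on 3D volumes, so this is illustrative only, not the thesis's exact procedure:

```python
from collections import Counter

def keep_largest_component(mask):
    """Zero out all but the largest 4-connected foreground component
    of a 2D binary mask (list of lists of 0/1), using union-find."""
    h, w = len(mask), len(mask[0])
    parent = list(range(h * w))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Union each foreground pixel with its right and lower neighbours.
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            if x + 1 < w and mask[y][x + 1]:
                union(y * w + x, y * w + x + 1)
            if y + 1 < h and mask[y + 1][x]:
                union(y * w + x, (y + 1) * w + x)

    # Count component sizes by root label, then keep the biggest one.
    labels = Counter(find(y * w + x)
                     for y in range(h) for x in range(w) if mask[y][x])
    if not labels:
        return mask
    keep = labels.most_common(1)[0][0]
    return [[1 if mask[y][x] and find(y * w + x) == keep else 0
             for x in range(w)] for y in range(h)]
```

A 3D version would index voxels as `z*h*w + y*w + x` and union through-slice neighbours as well, which is how small false-positive blobs away from the pancreas can be filtered after prediction.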
References
[1] Department of Statistics, Ministry of Health and Welfare (Taiwan). "Statistics on causes of death in Taiwan, 2022," (Jun. 12, 2023), [Online]. Available: https://www.mohw.gov.tw/cp-16-74869-1.html (visited on 05/06/2024).
[2] National Institute of Biomedical Imaging and Bioengineering. “Computed tomography (CT),” National Institute of Biomedical Imaging and Bioengineering. (Jun. 2022), [Online]. Available: https://www.nibib.nih.gov/science-education/science-topics/computed-tomography-ct (visited on 05/07/2024).
[3] RadiologyInfo.org. "Abdominal and pelvic CT." (Apr. 1, 2024), [Online]. Available: https://www.radiologyinfo.org/en/info/abdominct (visited on 05/27/2024).
[4] S. Suri, S. Gupta, and R. Suri, "Computed tomography in abdominal tuberculosis," British Journal of Radiology, vol. 72, no. 853, pp. 92–98, 1999.
[5] R. J. Alfidi, J. Haaga, T. F. Meaney, et al., “Computed tomography of the thorax and abdomen; a preliminary report,” Radiology, vol. 117, no. 2, pp. 257–264, Nov. 1975.
[6] Johns Hopkins Medicine. “Magnetic resonance imaging (MRI),” [Online]. Available: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/magnetic-resonance-imaging-mri (visited on 05/07/2024).
[7] K. Doi, “Computer-aided diagnosis in medical imaging: Historical review, current status and future potential,” Computerized Medical Imaging and Graphics, vol. 31, no. 4, pp. 198–211, 2007.
[8] H. Cao, Y. Wang, J. Chen, et al., “Swin-unet: Unet-like pure transformer for medical image segmentation,” in European conference on computer vision, Springer, 2022, pp. 205–218.
[9] A. Hatamizadeh, Y. Tang, V. Nath, et al., “Unetr: Transformers for 3d medical image segmentation,” in Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2022, pp. 574–584.
[10] Z. Huang, H. Wang, Z. Deng, et al., “Stu-net: Scalable and transferable medical image segmentation models empowered by large-scale supervised pre-training,” arXiv preprint arXiv:2304.06716, 2023.
[11] S. Chen, K. Ma, and Y. Zheng, “Med3d: Transfer learning for 3d medical image analysis,” arXiv preprint arXiv:1904.00625, 2019.
[12] O. Oktay, J. Schlemper, L. L. Folgoc, et al., “Attention u-net: Learning where to look for the pancreas,” arXiv preprint arXiv:1804.03999, 2018.
[13] J. Chen, Y. Lu, Q. Yu, et al., “Transunet: Transformers make strong encoders for medical image segmentation,” arXiv preprint arXiv:2102.04306, 2021.
[14] H. R. Roth, H. Oda, X. Zhou, et al., “An application of cascaded 3d fully convolutional networks for medical image segmentation,” Computerized Medical Imaging and Graphics, vol. 66, pp. 90–99, 2018.
[15] Q. Yu, L. Xie, Y. Wang, Y. Zhou, E. K. Fishman, and A. L. Yuille, “Recurrent saliency transformation network: Incorporating multi-stage visual cues for small organ segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8280–8289.
[16] F. Isensee, P. F. Jaeger, S. A. Kohl, J. Petersen, and K. H. Maier-Hein, “Nnu-net: A self-configuring method for deep learning-based biomedical image segmentation,” Nature methods, vol. 18, no. 2, pp. 203–211, 2021.
[17] Digital Imaging and Communications in Medicine. “About DICOM: Overview,” [Online]. Available: https://www.dicomstandard.org/about (visited on 05/27/2024).
[18] LEAD Tools - The World Leader In Imaging SDKs. “Overview: Basic dicom file structure,” [Online]. Available: https://www.leadtools.com/help/sdk/v20/dicom/api/overview-basic-dicom-file-structure.html (visited on 05/27/2024).
[19] Digital Imaging and Communications in Medicine. “The data set,” [Online]. Available: https://dicom.nema.org/dicom/2013/output/chtml/part05/chapter_7.html (visited on 05/27/2024).
[20] Digital Imaging and Communications in Medicine. “Illustration of the overall directory organization,” [Online]. Available: https://dicom.nema.org/medical/dicom/current/output/chtml/part03/sect_F.2.2.html#sect_F.2.2.1 (visited on 05/28/2024).
[21] S. S. Talathi, R. Zimmerman, and M. Young, “Anatomy, abdomen and pelvis, pancreas,” in StatPearls, Treasure Island (FL): StatPearls Publishing, 2023.
[22] M. Karpińska and M. Czauderna, “Pancreas – its functions, disorders, and physiological impact on the mammals’ organism,” Front. Physiol, vol. 13, p. 807632, Mar. 2022.
[23] F. Campbell and C. S. Verbeke, “Embryology, anatomy, and histology,” in Pathology of the Pancreas: A Practical Approach. Cham: Springer International Publishing, 2021, pp. 3–23.
[24] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
[25] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Lecture Notes in Computer Science, ser. Lecture notes in computer science, Cham: Springer International Publishing, 2015, pp. 234–241.
[26] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834–848, 2017.
[27] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: A nested u-net architecture for medical image segmentation,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings 4, Springer, 2018, pp. 3–11.
[28] H. Huang, L. Lin, R. Tong, et al., “Unet 3+: A full-scale connected unet for medical image segmentation,” in ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, 2020, pp. 1055–1059.
[29] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D u-net: Learning dense volumetric segmentation from sparse annotation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, ser. Lecture notes in computer science, Cham: Springer International Publishing, 2016, pp. 424–432.
[30] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 fourth international conference on 3D vision (3DV), IEEE, 2016, pp. 565–571.
[31] P. Hu, X. Li, Y. Tian, et al., “Automatic pancreas segmentation in ct images with distance-based saliency-aware denseaspp network,” IEEE journal of biomedical and health informatics, vol. 25, no. 5, pp. 1601–1611, 2020.
[32] R. O. Dogan, H. Dogan, C. Bayrak, and T. Kayikcioglu, “A two-phase approach using mask r-cnn and 3d u-net for high-accuracy automatic segmentation of pancreas in ct imaging,” Computer Methods and Programs in Biomedicine, vol. 207, p. 106141, 2021.
[33] S.-H. Lim, Y. J. Kim, Y.-H. Park, D. Kim, K. G. Kim, and D.-H. Lee, “Automated pancreas segmentation and volumetry using deep neural network on computed tomography,” Scientific Reports, vol. 12, no. 1, p. 4075, 2022.
[34] Y. Deng, L. Lan, L. You, et al., “Automated ct pancreas segmentation for acute pancreatitis patients by combining a novel object detection approach and u-net,” Biomedical signal processing and control, vol. 81, p. 104430, 2023.
[35] H. R. Roth, L. Lu, N. Lay, et al., “Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation,” Medical image analysis, vol. 45, pp. 94–107, 2018.
[36] T. D. DenOtter and J. Schubert, Hounsfield unit. Treasure Island (FL): StatPearls Publishing, Mar. 6, 2023.
[37] K. Greenway, R. Sharma, and V. C. D. “Hounsfield unit,” Radiopaedia. (Jul. 9, 2015), [Online]. Available: https://radiopaedia.org/articles/hounsfield-unit (visited on 05/07/2024).
[38] M. H. Lev and R. G. Gonzalez, “17 - ct angiography and ct perfusion imaging,” in Brain Mapping: The Methods (Second Edition), A. W. Toga and J. C. Mazziotta, Eds., Second Edition, San Diego: Academic Press, 2002, pp. 427–484.
[39] K. J. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics gems, 1994.
[40] A. Hatamizadeh, V. Nath, Y. Tang, D. Yang, H. R. Roth, and D. Xu, “Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images,” in International MICCAI Brainlesion Workshop, Springer, 2021, pp. 272–284.
[41] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 3–19.
[42] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
[43] P. Krähenbühl and V. Koltun, “Efficient inference in fully connected crfs with gaussian edge potentials,” CoRR, vol. abs/1210.5644, 2012.
[44] J. Lafferty, A. Mccallum, and F. Pereira, “Conditional random fields: Probabilistic models for segmenting and labeling sequence data,” Jan. 2001, pp. 282–289.
[45] B. A. Galler and M. J. Fisher, “An improved equivalence algorithm,” Commun. ACM, vol. 7, no. 5, pp. 301–303, May 1964.
[46] H. Roth, A. Farag, E. B. Turkbey, L. Lu, J. Liu, and R. M. Summers, Data from Pancreas-CT, 2016.
Advisor: Mu-Chun Su (蘇木春)    Approval Date: 2024-08-12
