References
[1] 衛生福利部統計處 (Department of Statistics, Ministry of Health and Welfare). “111 年國人死因統計結果 [2022 statistics on causes of death in Taiwan],” 衛生福利部統計處. (Jun. 12, 2023), [Online]. Available: https://www.mohw.gov.tw/cp-16-74869-1.html (visited on 05/06/2024).
[2] National Institute of Biomedical Imaging and Bioengineering. “Computed tomography (CT),” National Institute of Biomedical Imaging and Bioengineering. (Jun. 2022), [Online]. Available: https://www.nibib.nih.gov/science-education/science-topics/computed-tomography-ct (visited on 05/07/2024).
[3] RadiologyInfo.org. “Abdominal and pelvic CT.” (Apr. 1, 2024), [Online]. Available: https://www.radiologyinfo.org/en/info/abdominct (visited on 05/27/2024).
[4] S. Suri, S. Gupta, and R. Suri, “Computed tomography in abdominal tuberculosis,” British Journal of Radiology, vol. 72, no. 853, pp. 92–98, Jan. 1999.
[5] R. J. Alfidi, J. Haaga, T. F. Meaney, et al., “Computed tomography of the thorax and abdomen; a preliminary report,” Radiology, vol. 117, no. 2, pp. 257–264, Nov. 1975.
[6] Johns Hopkins Medicine. “Magnetic resonance imaging (MRI),” [Online]. Available: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/magnetic-resonance-imaging-mri (visited on 05/07/2024).
[7] K. Doi, “Computer-aided diagnosis in medical imaging: Historical review, current status and future potential,” Computerized Medical Imaging and Graphics, vol. 31, no. 4, pp. 198–211, 2007.
[8] H. Cao, Y. Wang, J. Chen, et al., “Swin-Unet: UNet-like pure transformer for medical image segmentation,” in European Conference on Computer Vision, Springer, 2022, pp. 205–218.
[9] A. Hatamizadeh, Y. Tang, V. Nath, et al., “UNETR: Transformers for 3D medical image segmentation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 574–584.
[10] Z. Huang, H. Wang, Z. Deng, et al., “STU-Net: Scalable and transferable medical image segmentation models empowered by large-scale supervised pre-training,” arXiv preprint arXiv:2304.06716, 2023.
[11] S. Chen, K. Ma, and Y. Zheng, “Med3D: Transfer learning for 3D medical image analysis,” arXiv preprint arXiv:1904.00625, 2019.
[12] O. Oktay, J. Schlemper, L. L. Folgoc, et al., “Attention U-Net: Learning where to look for the pancreas,” arXiv preprint arXiv:1804.03999, 2018.
[13] J. Chen, Y. Lu, Q. Yu, et al., “TransUNet: Transformers make strong encoders for medical image segmentation,” arXiv preprint arXiv:2102.04306, 2021.
[14] H. R. Roth, H. Oda, X. Zhou, et al., “An application of cascaded 3D fully convolutional networks for medical image segmentation,” Computerized Medical Imaging and Graphics, vol. 66, pp. 90–99, 2018.
[15] Q. Yu, L. Xie, Y. Wang, Y. Zhou, E. K. Fishman, and A. L. Yuille, “Recurrent saliency transformation network: Incorporating multi-stage visual cues for small organ segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8280–8289.
[16] F. Isensee, P. F. Jaeger, S. A. Kohl, J. Petersen, and K. H. Maier-Hein, “nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation,” Nature Methods, vol. 18, no. 2, pp. 203–211, 2021.
[17] Digital Imaging and Communications in Medicine. “About DICOM: Overview,” [Online]. Available: https://www.dicomstandard.org/about (visited on 05/27/2024).
[18] LEADTOOLS. “Overview: Basic DICOM file structure,” [Online]. Available: https://www.leadtools.com/help/sdk/v20/dicom/api/overview-basic-dicom-file-structure.html (visited on 05/27/2024).
[19] Digital Imaging and Communications in Medicine. “The data set,” [Online]. Available: https://dicom.nema.org/dicom/2013/output/chtml/part05/chapter_7.html (visited on 05/27/2024).
[20] Digital Imaging and Communications in Medicine. “Illustration of the overall directory organization,” [Online]. Available: https://dicom.nema.org/medical/dicom/current/output/chtml/part03/sect_F.2.2.html#sect_F.2.2.1 (visited on 05/28/2024).
[21] S. S. Talathi, R. Zimmerman, and M. Young, “Anatomy, abdomen and pelvis, pancreas,” in StatPearls, Treasure Island (FL): StatPearls Publishing, 2023.
[22] M. Karpińska and M. Czauderna, “Pancreas – its functions, disorders, and physiological impact on the mammals’ organism,” Frontiers in Physiology, vol. 13, p. 807 632, Mar. 2022.
[23] F. Campbell and C. S. Verbeke, “Embryology, anatomy, and histology,” in Pathology of the Pancreas: A Practical Approach, Cham: Springer International Publishing, 2021, pp. 3–23.
[24] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
[25] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, ser. Lecture Notes in Computer Science, Cham: Springer International Publishing, 2015, pp. 234–241.
[26] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2017.
[27] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: A nested U-Net architecture for medical image segmentation,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings 4, Springer, 2018, pp. 3–11.
[28] H. Huang, L. Lin, R. Tong, et al., “UNet 3+: A full-scale connected UNet for medical image segmentation,” in ICASSP 2020 – 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, pp. 1055–1059.
[29] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning dense volumetric segmentation from sparse annotation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, ser. Lecture Notes in Computer Science, Cham: Springer International Publishing, 2016, pp. 424–432.
[30] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 Fourth International Conference on 3D Vision (3DV), IEEE, 2016, pp. 565–571.
[31] P. Hu, X. Li, Y. Tian, et al., “Automatic pancreas segmentation in CT images with distance-based saliency-aware DenseASPP network,” IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 5, pp. 1601–1611, 2020.
[32] R. O. Dogan, H. Dogan, C. Bayrak, and T. Kayikcioglu, “A two-phase approach using Mask R-CNN and 3D U-Net for high-accuracy automatic segmentation of pancreas in CT imaging,” Computer Methods and Programs in Biomedicine, vol. 207, p. 106 141, 2021.
[33] S.-H. Lim, Y. J. Kim, Y.-H. Park, D. Kim, K. G. Kim, and D.-H. Lee, “Automated pancreas segmentation and volumetry using deep neural network on computed tomography,” Scientific Reports, vol. 12, no. 1, p. 4075, 2022.
[34] Y. Deng, L. Lan, L. You, et al., “Automated CT pancreas segmentation for acute pancreatitis patients by combining a novel object detection approach and U-Net,” Biomedical Signal Processing and Control, vol. 81, p. 104 430, 2023.
[35] H. R. Roth, L. Lu, N. Lay, et al., “Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation,” Medical Image Analysis, vol. 45, pp. 94–107, 2018.
[36] T. D. DenOtter and J. Schubert, “Hounsfield unit,” in StatPearls, Treasure Island (FL): StatPearls Publishing, Mar. 6, 2023.
[37] K. Greenway, R. Sharma, and V. C. D., “Hounsfield unit,” Radiopaedia. (Jul. 9, 2015), [Online]. Available: https://radiopaedia.org/articles/hounsfield-unit (visited on 05/07/2024).
[38] M. H. Lev and R. G. Gonzalez, “CT angiography and CT perfusion imaging,” in Brain Mapping: The Methods, 2nd ed., A. W. Toga and J. C. Mazziotta, Eds., San Diego: Academic Press, 2002, pp. 427–484.
[39] K. J. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, Academic Press, 1994, pp. 474–485.
[40] A. Hatamizadeh, V. Nath, Y. Tang, D. Yang, H. R. Roth, and D. Xu, “Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images,” in International MICCAI Brainlesion Workshop, Springer, 2021, pp. 272–284.
[41] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “CBAM: Convolutional block attention module,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3–19.
[42] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
[43] P. Krähenbühl and V. Koltun, “Efficient inference in fully connected CRFs with Gaussian edge potentials,” CoRR, vol. abs/1210.5644, 2012.
[44] J. Lafferty, A. McCallum, and F. Pereira, “Conditional random fields: Probabilistic models for segmenting and labeling sequence data,” in Proceedings of the Eighteenth International Conference on Machine Learning (ICML), 2001, pp. 282–289.
[45] B. A. Galler and M. J. Fisher, “An improved equivalence algorithm,” Communications of the ACM, vol. 7, no. 5, pp. 301–303, May 1964.
[46] H. Roth, A. Farag, E. B. Turkbey, L. Lu, J. Liu, and R. M. Summers, Data from Pancreas-CT, The Cancer Imaging Archive, 2016.