Doctoral/Master's Thesis 108886601: Detailed Record




Author: Le Thi Phuong (黎氏芳)    Department: Biomedical Sciences and Engineering
Thesis Title: 基於深度學習之皮膚病兆切割之研究 (A Study on Deep Learning Based Skin Lesion Segmentation)
Full text: viewing in the system is permanently restricted (永不開放)
Abstract (Chinese): This thesis studies deep-learning-based visual processing methods for skin lesion segmentation. According to the Skin Cancer Foundation, skin cancer has in recent years caused more than two deaths every hour in the United States, and one in five Americans will develop the disease. Because malignant skin growths occur so widely, the demand for skin disease prediction is expanding, especially for melanoma, which has a high metastasis rate. Skin lesion segmentation is an essential step both for accurately identifying the damaged skin area and for proposing a reasonable treatment.
For these reasons, many traditional methods and deep learning models have been applied to dermoscopic images to segment skin lesions. Nevertheless, in practical applications this task still suffers from several drawbacks common to biomedical images: training images are insufficient, and image quality is low because of noise from hair or blur. These problems have affected previous work negatively, causing low accuracy, high cost, and long processing times.
This thesis proposes an antialiasing attention spatial convolution (AASC) model to segment melanoma and other skin lesions in dermoscopic images. The study addresses the problems caused by the limited quantity and quality of input images; the resulting system can improve current Medical Internet of Things (MIoT) applications and provide relevant hints to clinical examiners. In our experiments, the AASC model performed well with relatively few parameters because it can overcome dermoscopic limitations such as dense hair, low contrast, and distorted shapes and shading. In addition, the model mitigates shift-variance loss. We evaluated model performance with several statistical metrics, including the Jaccard index, accuracy, precision, recall, F1 score, and Dice coefficient. Notably, compared with the current best results on the three commonly used datasets ISIC 2016, ISIC 2017, and PH2, the AASC model achieved the highest scores on two of the three.
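The abstract lists six evaluation metrics without defining them. For reference, their standard definitions for a predicted lesion mask A and a ground-truth mask B, in terms of pixel-level true/false positives and negatives (TP, FP, TN, FN), are as follows; these are the usual textbook formulas, not material quoted from the thesis:

\[
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN},
\]
\[
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad
\mathrm{Jaccard}(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{TP}{TP + FP + FN}, \qquad
\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|} = \frac{2\,TP}{2\,TP + FP + FN}.
\]

For binary masks the Dice coefficient is algebraically identical to the F1 score, so reporting both mainly serves as a consistency check between conventions.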
Abstract (English): A visual processing system for skin lesion segmentation is presented in this study. According to the Skin Cancer Foundation, skin cancer currently kills more than two people every hour in the United States, and one out of every five Americans will develop this disease. Malignant skin growth is becoming more common, so the need for skin disease prediction is expanding, especially for melanoma, which has a high metastasis rate. Skin lesion segmentation is an important step toward an exact prognosis of the damaged skin area, as well as toward proposing reasonable treatment methods.
Accordingly, numerous traditional algorithms, as well as deep learning models, have been applied to dermoscopic images for skin lesion segmentation. Nonetheless, some general drawbacks of biomedical images persist in this topic: the number of input images is insufficient, and image quality is degraded by noise from hair or blur. These issues affect previous works negatively, resulting in low accuracy, high cost, and long processing time.
This thesis presents an antialiasing attention spatial convolution (AASC) model to segment melanoma skin lesions in dermoscopic pictures. The model addresses the major problems of limited quantity and quality of input images. The proposed model can be applied in a system that improves current Medical IoT (MIoT) applications and gives related hints to clinical inspectors. Empirical results show that the AASC model performs well with fewer parameters, as it can overcome dermoscopic obstacles such as thick hair, low contrast, and shape and shading distortion. Furthermore, the model reduces shift-variance loss. Performance was assessed rigorously with statistical evaluation metrics, including the Jaccard index, accuracy, precision, recall, F1 score, and Dice coefficient. Remarkably, compared with state-of-the-art models across the three datasets ISIC 2016, ISIC 2017, and PH2, the AASC model yielded the highest scores on two of the three.
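The record itself does not describe the AASC architecture, but the two mechanisms the abstract names are documented in the literature: antialiased downsampling (a low-pass blur before subsampling, which reduces the shift-variance the abstract mentions; see R. Zhang, "Making convolutional networks shift-invariant again," ICML 2019, which the thesis cites) and spatial attention (reweighting feature maps with a spatial mask). Below is a minimal NumPy sketch of these two ideas only; the blur kernel, the sigmoid-of-mean attention mask, and all shapes are illustrative assumptions, not the thesis's actual AASC definition.

import numpy as np

def blurpool2d(x, stride=2):
    # Antialiased downsampling: blur with a low-pass kernel, then
    # subsample. Plain strided subsampling aliases high frequencies,
    # so small input shifts can change the output noticeably; blurring
    # first makes the result more shift-stable. The 3x3 binomial
    # kernel below is one common choice (an assumption here).
    k = np.array([[1.0, 2.0, 1.0],
                  [2.0, 4.0, 2.0],
                  [1.0, 2.0, 1.0]])
    k /= k.sum()
    h, w = x.shape
    padded = np.pad(x, 1, mode="reflect")
    blurred = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return blurred[::stride, ::stride]  # subsample only after blurring

def spatial_attention(feats):
    # Toy spatial attention: derive an (H, W) mask from the
    # channel-averaged feature map and reweight every channel by it.
    # Real modules learn the mask with a convolution; the sigmoid of
    # the mean map stands in for that here (an assumption).
    avg = feats.mean(axis=0)              # (H, W) channel average
    mask = 1.0 / (1.0 + np.exp(-avg))     # sigmoid keeps weights in (0, 1)
    return feats * mask[None, :, :]       # broadcast mask over channels

# Usage sketch: attend to the feature maps, then downsample each
# channel without aliasing.
feats = np.random.randn(8, 32, 32)        # (channels, H, W), made-up data
attended = spatial_attention(feats)
downsampled = np.stack([blurpool2d(c) for c in attended])
print(downsampled.shape)                  # -> (8, 16, 16)

In a CNN, such a blur-then-subsample step typically replaces strided convolution or max pooling at each downsampling stage, which is the usual way the shift-variance loss mentioned in the abstract is reduced.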
Keywords (Chinese): ★ 皮膚病兆切割 (skin lesion segmentation)
★ 皮膚癌 (skin cancer)
★ 深度學習 (deep learning)
★ 注意力卷積模型 (attention convolution model)
★ 醫療物聯網 (Medical Internet of Things)
Keywords (English): ★ Skin lesion segmentation
★ Skin Cancer
★ Deep Learning
★ Attention Spatial Convolution
★ Medical Internet of Things
Table of Contents:
A Study on Deep Learning Based Skin Lesion Segmentation
Abstract (Chinese)
Abstract (English)
Contents
List of Symbols and Abbreviations
List of Figures
List of Tables
Chapter 1
1.1 Motivation
1.2 Research Problem
1.3 Research Objective
Chapter 2 Related Work
Chapter 3 Theoretical Basis
Chapter 4 Proposed Method
Chapter 5 Experiments and Results
Chapter 6 Conclusion and Future Work
Chapter 2
2.1 Deep learning
2.1.1 Artificial intelligence
2.1.2 Machine learning
2.1.3 Deep learning
2.2 Skin segmentation
Chapter 3
3.1 Traditional methods
3.2 Deep learning methods
Chapter 4 Proposed Method
4.1 Datasets
4.2 Proposed model
4.2.1 Pre-processing method
4.2.2 Proposed model
4.3 Evaluation method
Chapter 5 Experiments and Results
5.1 Experiment setting
5.2 Results
5.2.1 Comparison of training and testing results of the proposed method
5.2.2 Comparative experiment on the ISIC 2016 dataset
5.2.3 Comparative experiment on the ISIC 2017 dataset
5.2.4 Comparative experiment on the PH2 dataset
Bibliography




Advisors: Yi-Chiung Hsu (許藝瓊) and Jia-Ching Wang (王家慶)    Date of Approval: 2022-05-31
