Abstract: | A deep-learning-based visual processing method for skin lesion segmentation is presented in this study. According to the Skin Cancer Foundation, skin cancer currently kills more than two people every hour in the United States, and one in five Americans will develop the disease. Because malignant skin growths are becoming more widespread, the need for skin disease prediction is growing, especially for melanoma, which has a high metastasis rate. Skin lesion segmentation is a key step both in accurately delineating the damaged skin area and in proposing a reasonable treatment plan. Accordingly, many conventional algorithms as well as deep learning models have been applied to dermoscopic images for skin lesion segmentation. Nonetheless, this task still suffers from drawbacks common to biomedical imaging: the number of input images is insufficient, and image quality is degraded by hair artifacts and blur. These issues have hurt previous work, leading to low accuracy, high cost, and long processing times. This paper presents the antialiasing attention spatial convolution (AASC) model to segment melanoma and other skin lesions in dermoscopic images. The model addresses both the limited quantity and the low quality of the input images.
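The abstract names the model's antialiasing component but does not describe its design. A common way to reduce shift variance in convolutional networks is to low-pass filter a feature map before strided subsampling (the "blur pool" idea); the sketch below illustrates that mechanism only. The function name, kernel, and layout are illustrative assumptions, not the AASC layer from the paper.

```python
import numpy as np

def blur_downsample(x, stride=2):
    """Anti-aliased downsampling for a 2-D feature map: low-pass filter
    with a fixed 3x3 binomial kernel, then subsample by `stride`.
    Hypothetical sketch of the blur-pool mechanism, not the AASC layer."""
    k1 = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1, k1)
    kernel /= kernel.sum()              # normalized binomial blur kernel
    h, w = x.shape
    padded = np.pad(x, 1, mode="edge")  # same-size output before subsampling
    blurred = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    # Subsampling after blurring suppresses the aliasing that plain
    # strided convolution or pooling would introduce.
    return blurred[::stride, ::stride]
```

Because the blur is applied first, small translations of the input produce only small changes in the subsampled output, which is the shift-variance problem the abstract refers to.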
The proposed model can be deployed in systems that improve current Medical IoT (MIoT) applications and give related hints to clinical inspectors. Empirical results show that the AASC model performs well with fewer parameters, as it overcomes dermoscopic obstacles such as thick hair, low contrast, and distortions of shape and shading. Furthermore, the model mitigates shift variance. Performance was assessed rigorously under statistical evaluation metrics, namely the Jaccard index, accuracy, precision, recall, F1 score, and Dice coefficient. Remarkably, compared with state-of-the-art models across the three datasets ISIC 2016, ISIC 2017, and PH2, the AASC model yielded the highest scores on two of them. |
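The six metrics listed above are all standard functions of the confusion-matrix counts between a predicted binary mask and a ground-truth mask. A minimal NumPy implementation is sketched below; the function name is ours, not from the paper.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute the abstract's evaluation metrics for two binary masks
    (1 = lesion, 0 = background) of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # lesion pixels found
    fp = np.logical_and(pred, ~target).sum()   # background marked as lesion
    fn = np.logical_and(~pred, target).sum()   # lesion pixels missed
    tn = np.logical_and(~pred, ~target).sum()  # background correctly ignored
    eps = 1e-8  # guards against division by zero on empty masks
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {
        "jaccard": tp / (tp + fp + fn + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall + eps),
    }
```

Note that for binary masks the Dice coefficient equals the F1 score and is a monotone function of the Jaccard index, which is why segmentation papers often report all three together.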