dc.description.abstract | This study presents a visual processing system for skin lesion segmentation. According to the Skin Cancer Foundation, skin cancer kills more than two people every hour in the United States, and one in five Americans will develop the disease. As skin cancer becomes more prevalent, the need for early skin disease prediction is growing, especially for melanoma, which has a high metastasis rate. Skin lesion segmentation is a key step toward accurately delineating the damaged skin area and proposing suitable treatment.
Accordingly, numerous traditional algorithms as well as deep learning models have been applied to dermoscopic images for skin lesion segmentation. Nevertheless, general drawbacks of biomedical imaging persist in this task: the number of input images is often insufficient, and image quality is degraded by hair noise or blur. These issues have hampered previous work, resulting in low accuracy, high cost, and long processing times.
This paper presents an antialiasing attention spatial convolution (AASC) model to segment melanoma skin lesions in dermoscopic images. The model addresses the limitations of both the quantity and the quality of input images. It can be integrated into systems that enhance current Medical IoT (MIoT) applications and provide relevant cues to clinical examiners. Empirical results show that the AASC model performs well with fewer parameters and overcomes dermoscopic obstacles such as thick hair, low contrast, and shape or color distortion. Furthermore, the model reduces shift-variance loss. Performance was rigorously assessed with statistical evaluation metrics, namely the Jaccard index, Accuracy, Precision, Recall, F1 score, and Dice coefficient. Notably, the AASC model achieved the highest scores compared with state-of-the-art models across all three datasets: ISIC 2016, ISIC 2017, and PH2. | en_US |
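For reference, the Jaccard index and Dice coefficient named in the abstract are standard overlap measures between a predicted binary mask and a ground-truth mask. The following minimal NumPy sketch is illustrative only and is not taken from the paper; the function names and the eps smoothing term are assumptions.

    import numpy as np

    def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        # Jaccard index (IoU): |A intersect B| / |A union B|
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return float(intersection / (union + eps))

    def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        # Dice coefficient: 2 * |A intersect B| / (|A| + |B|)
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

    # Example on hypothetical 4x4 masks: identical masks give a score of ~1.0.
    mask = np.array([[0, 1], [1, 1]])
    print(jaccard_index(mask, mask), dice_coefficient(mask, mask))

The eps term guards against division by zero when both masks are empty; Accuracy, Precision, Recall, and F1 follow the usual pixel-wise confusion-matrix definitions.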