Deep Learning has emerged as a highly efficient and powerful tool in computer vision, with significant applications in the medical domain. This dissertation presents novel Deep Learning approaches for electrocardiogram (ECG) classification and skin lesion segmentation.

For ECG classification, we thoroughly investigate a new model that outperforms existing methods: it achieves an accuracy of 98.5% on the PhysioNet MIT-BIH dataset, 98.28% on the PTB database, and an F1 score of approximately 86.71% on the PhysioNet Challenge 2017 dataset.

For skin cancer prediction, the conventional approach of visual examination by dermatologists faces limitations such as low accuracy, long examination times, and reliance on human factors and expertise. To overcome these challenges, we propose a novel lightweight segmentation model, the Mobile Anti-Aliasing Attention U-Net. Experimental results on the International Skin Imaging Collaboration (ISIC) 2018 and PH2 datasets demonstrate that our approach not only requires fewer parameters but also outperforms several state-of-the-art segmentation methods.

In summary, our Deep Learning approaches exhibit superior performance in medical applications and, owing to their efficient inference, hold potential for practical deployment on real devices. The outcomes of this research contribute to progress in this field and pave the way for further research on, and application of, these methods in medical settings.
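The accuracy and F1 figures reported above follow their standard definitions (accuracy = fraction of correct predictions; macro F1 = unweighted mean of per-class F1 scores, each the harmonic mean of precision and recall, as commonly used for the multi-class PhysioNet Challenge 2017 task). A minimal pure-Python sketch with hypothetical rhythm labels, not the dissertation's actual evaluation code:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (harmonic mean of precision and recall)."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy example: hypothetical labels ('N' = normal, 'A' = AF, 'O' = other)
y_true = ['N', 'N', 'A', 'O', 'A', 'N']
y_pred = ['N', 'A', 'A', 'O', 'A', 'N']
print(accuracy(y_true, y_pred))           # 5 of 6 correct -> 0.8333...
print(round(macro_f1(y_true, y_pred), 4)) # 0.8667
```

The same helpers apply unchanged to beat-level MIT-BIH labels or any other discrete class set.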