Supervised learning achieves excellent performance when abundant labeled data are available, but it often suffers from overfitting in medical image classification due to the scarcity and inaccuracy of labeled data, hindering its adoption in real-world applications.
To address this problem, this thesis investigates the integration of a multi-task learning scheme with self-supervised learning (SSL) pretraining. The core idea is to use SSL pretraining to encourage the model to capture subtle yet crucial features in medical images, thereby mitigating overfitting and improving performance when fine-tuning on the downstream task. Specifically, we pretrain the model with a variety of SSL methods and evaluate them on a multi-class, multi-output abdominal trauma detection task.
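To illustrate this pretrain-then-finetune pipeline, the following is a minimal sketch assuming a SimCLR-style contrastive objective (InfoNCE), a ResNet-18 encoder, and per-organ classification heads for the multi-output task; the module names, organ labels, and class counts are illustrative assumptions, not the architecture used in the thesis.

```python
# Minimal sketch (not the thesis implementation): SSL pretraining followed by
# fine-tuning with a multi-output classification head. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ProjectionHead(nn.Module):
    """Maps encoder features to the space where the contrastive loss is applied."""
    def __init__(self, in_dim=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, in_dim), nn.ReLU(), nn.Linear(in_dim, out_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def info_nce(z1, z2, temperature=0.1):
    """SimCLR-style InfoNCE loss over two augmented views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                      # (2N, d), already L2-normalized
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# --- SSL pretraining: encoder + projection head trained with the contrastive objective.
encoder = torchvision.models.resnet18(weights=None)
encoder.fc = nn.Identity()                               # expose 512-d features
proj = ProjectionHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(proj.parameters()), lr=1e-3)

view1 = torch.randn(8, 3, 224, 224)                      # stand-ins for two augmented views
view2 = torch.randn(8, 3, 224, 224)
opt.zero_grad()
loss = info_nce(proj(encoder(view1)), proj(encoder(view2)))
loss.backward(); opt.step()

# --- Fine-tuning: reuse the pretrained encoder, attach one head per output
# (e.g. per abdominal organ), and train with a summed cross-entropy loss.
heads = nn.ModuleDict({
    "liver": nn.Linear(512, 3),    # class counts here are placeholders
    "spleen": nn.Linear(512, 3),
    "bowel": nn.Linear(512, 2),
})
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)
images = torch.randn(8, 3, 224, 224)
labels = {k: torch.randint(0, h.out_features, (8,)) for k, h in heads.items()}
ft_opt.zero_grad()
feats = encoder(images)
ft_loss = sum(F.cross_entropy(h(feats), labels[k]) for k, h in heads.items())
ft_loss.backward(); ft_opt.step()
```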
Our experimental results demonstrate that SSL and SCL pretraining can alleviate the overfitting problem that commonly occurs in supervised learning, while also yielding modest improvements across several metrics. Further analysis, comparing results obtained with different feature extractor components, reveals that the image feature extractor is the major contributor to these gains. Lastly, by switching the backbone model of the feature extractor, we find that SCL has the potential to reinforce model robustness, providing insight into breaking through the robustness bottleneck.
In conclusion, our research suggests that SSL pretraining can improve both the classification performance and the robustness of a model on complex medical image classification tasks.