Investigating the Robustness of Vision Transformers against Label Noise in Medical Image Classification
CoRR (2024)
Abstract
Label noise in medical image classification datasets significantly hampers
the training of supervised deep learning methods, undermining their
generalizability. The test performance of a model tends to decrease as the
label noise rate increases. Over recent years, several methods have been
proposed to mitigate the impact of label noise in medical image classification
and enhance the robustness of the model. Predominantly, these works have
employed CNN-based architectures as the backbone of their classifiers for
feature extraction. However, in recent years, Vision Transformer (ViT)-based
backbones have replaced CNNs, demonstrating improved performance and a greater
ability to learn more generalizable features, especially when the dataset is
large. Nevertheless, no prior work has rigorously investigated how
transformer-based backbones handle the impact of label noise in medical image
classification. In this paper, we investigate the architectural robustness of
ViT against label noise and compare it to that of CNNs. We use two medical
image classification datasets – COVID-DU-Ex and NCT-CRC-HE-100K – both
corrupted by injecting label noise at various rates. Additionally, we show that
pretraining is crucial for ensuring ViT's improved robustness against label
noise in supervised training.
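The abstract describes corrupting the datasets by injecting label noise at various rates. The paper's exact corruption scheme is not stated here; a common choice in this literature is symmetric label noise, where a fixed fraction of labels is flipped uniformly to a different class. The sketch below (function name and parameters are illustrative, not from the paper) shows one such injection:

```python
import numpy as np

def inject_symmetric_label_noise(labels, noise_rate, num_classes, seed=0):
    """Flip a fraction `noise_rate` of labels to a uniformly chosen
    *different* class. Illustrative sketch of symmetric label noise;
    the paper may use a different corruption scheme."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n = len(labels)
    # Pick which samples to corrupt (no sample corrupted twice).
    n_flip = int(round(noise_rate * n))
    flip_idx = rng.choice(n, size=n_flip, replace=False)
    for i in flip_idx:
        # Choose any class except the current (clean) one.
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels
```

Because every flipped sample is guaranteed to receive a different class, the observed noise rate equals the requested rate exactly, which makes sweeping noise levels (e.g. 10%, 20%, 40%) reproducible across backbones.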