Paper Title

Delving into Masked Autoencoders for Multi-Label Thorax Disease Classification

Authors

Junfei Xiao, Yutong Bai, Alan Yuille, Zongwei Zhou

Abstract

Vision Transformer (ViT) has become one of the most popular neural architectures due to its great scalability, computational efficiency, and compelling performance in many vision tasks. However, ViT has shown inferior performance to Convolutional Neural Network (CNN) on medical tasks due to its data-hungry nature and the lack of annotated medical data. In this paper, we pre-train ViTs on 266,340 chest X-rays using Masked Autoencoders (MAE) which reconstruct missing pixels from a small part of each image. For comparison, CNNs are also pre-trained on the same 266,340 X-rays using advanced self-supervised methods (e.g., MoCo v2). The results show that our pre-trained ViT performs comparably (sometimes better) to the state-of-the-art CNN (DenseNet-121) for multi-label thorax disease classification. This performance is attributed to the strong recipes extracted from our empirical studies for pre-training and fine-tuning ViT. The pre-training recipe signifies that medical reconstruction requires a much smaller proportion of an image (10% vs. 25%) and a more moderate random resized crop range (0.5~1.0 vs. 0.2~1.0) compared with natural imaging. Furthermore, we remark that in-domain transfer learning is preferred whenever possible. The fine-tuning recipe discloses that layer-wise LR decay, RandAug magnitude, and DropPath rate are significant factors to consider. We hope that this study can direct future research on the application of Transformers to a larger variety of medical imaging tasks.
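The core of the pre-training recipe is random patch masking: only a small visible fraction of each X-ray (10%, versus the 25% typical for natural images) is fed to the encoder, and the decoder reconstructs the rest. A minimal sketch of that masking step is below; the helper name and the use of NumPy are illustrative assumptions, not the authors' code.

```python
import numpy as np

def random_patch_mask(num_patches: int, visible_ratio: float, rng=None):
    """Randomly select which patches stay visible for MAE-style pre-training.

    Per the recipe above, chest X-rays use a smaller visible fraction
    (visible_ratio=0.10) than the 0.25 common for natural images.
    Returns the sorted visible patch indices and a boolean mask where
    True marks a patch to be reconstructed.
    """
    rng = np.random.default_rng(rng)
    num_visible = max(1, int(round(num_patches * visible_ratio)))
    perm = rng.permutation(num_patches)          # uniform random ordering
    visible_idx = np.sort(perm[:num_visible])    # keep the first few visible
    mask = np.ones(num_patches, dtype=bool)      # True = masked
    mask[visible_idx] = False
    return visible_idx, mask

# A 224x224 image split into 16x16 patches yields 14*14 = 196 patches;
# at a 10% visible ratio, 20 patches are kept and 176 are reconstructed.
visible_idx, mask = random_patch_mask(196, visible_ratio=0.10, rng=0)
```

The other recipe ingredient, the more moderate random-resized-crop range, would correspond to something like `RandomResizedCrop(size, scale=(0.5, 1.0))` in torchvision, versus the `scale=(0.2, 1.0)` default used for natural images.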
