Paper Title

CASS: Cross Architectural Self-Supervision for Medical Image Analysis

Paper Authors

Pranav Singh, Elena Sizikova, Jacopo Cirrone

Paper Abstract

Recent advances in deep learning and computer vision have reduced many barriers to automated medical image analysis, allowing algorithms to process label-free images and improve performance. However, existing techniques have extreme computational requirements and lose significant performance when batch size or the number of training epochs is reduced. This paper presents Cross Architectural - Self Supervision (CASS), a novel self-supervised learning approach that leverages Transformers and CNNs simultaneously. Compared to existing state-of-the-art self-supervised learning approaches, we empirically show that CASS-trained CNNs and Transformers across four diverse datasets gained an average of 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data, while taking 69% less time. We also show that CASS is much more robust to changes in batch size and training epochs. Notably, one of the test datasets comprised histopathology slides of an autoimmune disease, a condition with minimal data that has been underrepresented in medical imaging. The code is open source and is available on GitHub.
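The abstract's core idea is that the same images are encoded by two different backbones (a CNN and a Transformer) and training pushes their representations to agree, so no labels are needed. The following is a minimal, hypothetical sketch of that cross-architecture agreement objective using NumPy stand-ins; the encoders, dimensions, and loss form here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Stand-in encoder: a linear projection followed by L2 normalization.

    In CASS this would be a CNN or a Transformer backbone; here it is a
    toy random projection so the sketch stays self-contained.
    """
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def cross_arch_loss(z_a, z_b):
    """Negative mean cosine similarity between per-image feature pairs.

    Minimizing this drives the two architectures toward agreeing on each
    image's representation; it ranges from -1 (perfect agreement) to 1.
    """
    return -np.mean(np.sum(z_a * z_b, axis=1))

# A batch of 8 "images" flattened to 32-dim vectors (toy dimensions).
x = rng.standard_normal((8, 32))
w_cnn = rng.standard_normal((32, 16))  # stand-in for CNN parameters
w_vit = rng.standard_normal((32, 16))  # stand-in for Transformer parameters

loss = cross_arch_loss(encode(x, w_cnn), encode(x, w_vit))
print(float(loss))
```

In a real training loop both backbones would be updated by gradient descent on a loss of this general shape, which is how the method avoids relying on labeled data or augmentation-defined positive pairs.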
