Paper Title

Domain Adaptation Meets Zero-Shot Learning: An Annotation-Efficient Approach to Multi-Modality Medical Image Segmentation

Authors

Cheng Bian, Chenglang Yuan, Kai Ma, Shuang Yu, Dong Wei, Yefeng Zheng

Abstract

Due to the lack of properly annotated medical data, exploring the generalization capability of the deep model is becoming a public concern. Zero-shot learning (ZSL) has emerged in recent years to equip the deep model with the ability to recognize unseen classes. However, existing studies mainly focus on natural images, which utilize linguistic models to extract auxiliary information for ZSL. It is impractical to apply the natural image ZSL solutions directly to medical images, since the medical terminology is very domain-specific, and it is not easy to acquire linguistic models for the medical terminology. In this work, we propose a new paradigm of ZSL specifically for medical images utilizing cross-modality information. We make three main contributions with the proposed paradigm. First, we extract the prior knowledge about the segmentation targets, called relation prototypes, from the prior model and then propose a cross-modality adaptation module to inherit the prototypes to the zero-shot model. Second, we propose a relation prototype awareness module to make the zero-shot model aware of information contained in the prototypes. Last but not least, we develop an inheritance attention module to recalibrate the relation prototypes to enhance the inheritance process. The proposed framework is evaluated on two public cross-modality datasets including a cardiac dataset and an abdominal dataset. Extensive experiments show that the proposed framework significantly outperforms the state of the art.
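The abstract describes an inheritance attention module that recalibrates the relation prototypes before they are inherited by the zero-shot model. The paper's exact formulation is not given here, but the general idea of attention-based prototype recalibration can be illustrated with a minimal, speculative sketch: class prototypes (vectors extracted by a prior model) are re-weighted for each query feature by a similarity-based attention over the prototype set. All function names, shapes, and the dot-product similarity are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recalibrate_prototypes(features, prototypes):
    """Hypothetical attention-style recalibration (not the paper's exact module).

    features:   (N, D) feature vectors from the zero-shot model
    prototypes: (C, D) relation prototypes extracted by the prior model
    returns:    (N, D) a recalibrated prototype for each query feature,
                formed as an attention-weighted mixture of class prototypes
    """
    d = features.shape[1]
    # Scaled dot-product similarity between each feature and each prototype.
    attn = softmax(features @ prototypes.T / np.sqrt(d))  # (N, C), rows sum to 1
    return attn @ prototypes                              # (N, D)

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))   # 4 query features, dim 8
protos = rng.standard_normal((3, 8))  # 3 class prototypes, dim 8
recal = recalibrate_prototypes(feats, protos)
print(recal.shape)  # (4, 8)
```

In this toy version, features that lie close to one prototype receive a recalibrated vector dominated by that prototype, which is the general mechanism an attention-based inheritance step would exploit.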
