Paper Title

Few-shot Unsupervised Domain Adaptation for Multi-modal Cardiac Image Segmentation

Paper Authors

Mingxuan Gu, Sulaiman Vesal, Ronak Kosti, Andreas Maier

Paper Abstract

Unsupervised domain adaptation (UDA) methods aim to reduce the gap between source and target domains by using unlabeled target-domain data together with labeled source-domain data. In the medical domain, however, target-domain data may not always be readily available, and acquiring new samples is generally time-consuming, which restricts the development of UDA methods for new domains. In this paper, we explore the potential of UDA in a more challenging yet realistic scenario where only one unlabeled target patient sample is available. We call this Few-shot Unsupervised Domain Adaptation (FUDA). We first generate target-style images from source images, exploring diverse target styles from a single target patient with Random Adaptive Instance Normalization (RAIN). A segmentation network is then trained in a supervised manner on the generated target-style images. Our experiments demonstrate that FUDA improves segmentation performance on the target domain by 0.33 Dice score over the baseline, and also yields a 0.28 Dice score improvement in the more rigorous one-shot setting. Our code is available at \url{https://github.com/MingxuanGu/Few-shot-UDA}.
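The style-transfer step described above builds on Adaptive Instance Normalization (AdaIN), which re-normalizes the per-channel statistics of a content feature map to match those of a style feature map. The sketch below is illustrative only: the function names are hypothetical, it operates on raw image arrays rather than encoder features, and it jitters the style statistics directly with Gaussian noise, whereas the paper's RAIN samples the perturbation in a learned latent space.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """AdaIN: normalize per-channel content statistics, then rescale and
    shift them to match the style statistics. Inputs have shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

def random_adain(content, style, sigma=0.1, rng=None):
    """Illustrative 'random' variant: perturb the single target sample's
    statistics with Gaussian noise to explore diverse styles around it.
    (A simplification of RAIN, which perturbs a learned latent code.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # jitter both scale and shift of the target statistics
    jittered = ((style - s_mean) * (1 + sigma * rng.standard_normal(s_std.shape))
                + s_mean + sigma * rng.standard_normal(s_mean.shape))
    return adain(content, jittered)
```

A stylized source image produced this way keeps the source's anatomy (and hence its segmentation label) while taking on target-like intensity statistics, which is what allows supervised training on the generated images.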
