Paper Title
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis
Paper Authors
Paper Abstract
Deep learning-based image synthesis techniques have been applied in healthcare research for generating medical images to support open research. Training generative adversarial networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way of training a central model using distributed data from different medical institutions while keeping raw data locally. However, FL is vulnerable to backdoor attacks, an adversarial attack that poisons the training data, since the central server cannot access the original data directly. Most backdoor attack strategies focus on classification models and centralized domains. In this study, we propose a way of attacking federated GANs (FedGAN) by treating the discriminator with a data poisoning strategy commonly used in backdoor attacks on classification models. We demonstrate that adding a small trigger, with a size of less than 0.5 percent of the original image size, can corrupt the FL-GAN model. Based on the proposed attack, we provide two effective defense strategies: global malicious detection and local training regularization. We show that combining the two defense strategies yields robust medical image generation.
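As a rough, hypothetical illustration of the kind of data poisoning the abstract describes (not the authors' implementation), a backdoor trigger can be stamped onto a malicious client's local training images before they are fed to the discriminator. The patch size, value, and bottom-right placement below are assumptions chosen only to show the scale involved; the sketch is written in PyTorch.

```python
import torch

def add_trigger(images: torch.Tensor, trigger_value: float = 1.0,
                patch_size: int = 3) -> torch.Tensor:
    """Stamp a small constant square trigger onto a batch of (N, C, H, W) images."""
    # Copy so the clean images are left untouched.
    poisoned = images.clone()
    # A 3x3 patch on a 256x256 image covers roughly 0.01% of the pixels,
    # well under the 0.5% figure quoted in the abstract.
    poisoned[:, :, -patch_size:, -patch_size:] = trigger_value
    return poisoned

# Example: poison a batch of 256x256 single-channel images.
batch = torch.rand(8, 1, 256, 256)
poisoned_batch = add_trigger(batch)
```

In a FedGAN setting, a malicious client would apply such a stamp to (part of) its local real images, so the discriminator it trains, and hence the aggregated global model, learns from corrupted data.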