Paper Title

GAN-based Medical Image Small Region Forgery Detection via a Two-Stage Cascade Framework

Paper Authors

Jianyi Zhang, Xuanxi Huang, Yaqi Liu, Yuyang Han, Zixiao Xiang

Abstract

Using generative adversarial networks (GANs)\cite{RN90} for data augmentation of medical images is significantly helpful for many computer-aided diagnosis (CAD) tasks. However, a new attack called CT-GAN has emerged. It can inject lung cancer lesions into, or remove them from, CT scans. Because the tampered region may account for less than 1\% of the original image, it is challenging even for state-of-the-art methods to detect the traces of such tampering. This paper proposes a two-stage cascade framework to detect GAN-based small region forgery in medical images, such as that produced by CT-GAN. In the local detection stage, we train the detector network on small sub-images so that interference information from authentic regions does not affect the detector. We use depthwise separable convolutions and residual connections to prevent the detector from over-fitting, and enhance its ability to find forged regions through an attention mechanism. The detection results of all sub-images in the same image are combined into a heatmap. In the global classification stage, we use the gray-level co-occurrence matrix (GLCM) to better extract features of the heatmap. Because the shape and size of the tampered area are uncertain, we train PCA and SVM for classification. Our method can classify whether a CT image has been tampered with and locate the tampered position. Extensive experiments show that our method achieves excellent performance.
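The global classification stage extracts GLCM features from the heatmap before PCA and SVM. As a minimal sketch of what a GLCM computation looks like, the snippet below counts gray-level co-occurrences for a single pixel offset and derives two standard Haralick-style texture features; the specific offsets and feature set used in the paper are not given in the abstract, so these choices are illustrative assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Count co-occurrences of gray levels at offset (dx, dy).

    img is a 2-D integer array with values in [0, levels).
    """
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def glcm_features(m):
    """Two common texture features from a GLCM: contrast and energy."""
    p = m / m.sum()                         # normalize to a joint distribution
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy
```

In a pipeline like the one described, such features (typically concatenated over several offsets and angles) would be reduced by PCA and then fed to the SVM classifier.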
