Paper Title

FFusionCGAN: An end-to-end fusion method for few-focus images using conditional GAN in cytopathological digital slides

Authors

Geng, Xiebo; Liu, Sibo; Han, Wei; Li, Xu; Ma, Jiabo; Yu, Jingya; Liu, Xiuli; Zeng, Shaoqun; Chen, Li; Cheng, Shenghua

Abstract

Multi-focus image fusion technologies compress images of different focus depths into a single image in which most objects are in focus. However, although existing image fusion techniques, including traditional algorithms and deep learning-based algorithms, can generate high-quality fused images, they require multiple images with different focus depths in the same field of view. This criterion may not be met in cases where time efficiency is required or the hardware is insufficient. The problem is especially prominent in large-size whole slide images. This paper focuses on multi-focus image fusion of cytopathological digital slide images, and proposes a novel method for generating fused images from single-focus or few-focus images based on a conditional generative adversarial network (GAN). Through the adversarial learning of the generator and discriminator, the method is capable of generating fused images with clear textures and a large depth of field. Combined with the characteristics of cytopathological images, this paper designs a new generator architecture combining U-Net and DenseBlock, which can effectively enlarge the network's receptive field and comprehensively encode image features. Meanwhile, this paper develops a semantic segmentation network that identifies the blurred regions in cytopathological images. By integrating this network into the generative model, the quality of the generated fused images is effectively improved. Our method can generate fused images from only single-focus or few-focus images, thereby avoiding the need to collect multiple images of different focus depths at increased time and hardware cost. Furthermore, our model is designed to learn a direct mapping from input source images to fused images, without the need to manually design complex activity level measurements and fusion rules as in traditional methods.
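The dense connectivity that the DenseBlock component of the generator relies on can be sketched as follows. This is a minimal NumPy illustration of the connectivity pattern only, not the paper's implementation: `conv_like` is a hypothetical stand-in for a real convolution layer, and the channel counts are arbitrary. The point it shows is that each layer receives the concatenation of all preceding feature maps, so the channel dimension grows linearly with depth.

```python
import numpy as np

def conv_like(x, out_channels, seed=0):
    """Hypothetical stand-in for a convolution: a fixed random 1x1
    channel mixing followed by ReLU. Input/output shape: (C, H, W)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.maximum(np.tensordot(w, x, axes=1), 0.0)

def dense_block(x, growth_rate=4, num_layers=3):
    """Dense connectivity: each layer sees the concatenation of all
    earlier feature maps, so channels grow by `growth_rate` per layer."""
    features = x
    for i in range(num_layers):
        new = conv_like(features, growth_rate, seed=i)
        features = np.concatenate([features, new], axis=0)  # channel axis
    return features

x = np.ones((8, 16, 16))   # (channels, height, width)
y = dense_block(x)
print(y.shape)             # channels: 8 + 3 * 4 = 20 -> (20, 16, 16)
```

In the paper's generator this pattern sits inside a U-Net encoder-decoder, where the reuse of earlier feature maps helps preserve fine cell texture while the U-Net skip connections carry spatial detail across scales.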
