Paper Title
TFCNs: A CNN-Transformer Hybrid Network for Medical Image Segmentation
Paper Authors
Paper Abstract
Medical image segmentation is one of the most fundamental tasks in medical information analysis. Various solutions have been proposed, including many deep learning-based techniques such as U-Net and FC-DenseNet. However, high-precision medical image segmentation remains highly challenging due to the inherent magnification and distortion in medical images, as well as the presence of lesions whose density is similar to that of normal tissue. In this paper, we propose TFCNs (Transformers for Fully Convolutional denseNets) to tackle this problem by introducing the ResLinear-Transformer (RL-Transformer) and the Convolutional Linear Attention Block (CLAB) into FC-DenseNet. TFCNs not only exploits more latent information from CT images for feature extraction, but also captures and disseminates semantic features while filtering out non-semantic features more effectively through the CLAB module. Our experimental results show that TFCNs achieves state-of-the-art performance with a Dice score of 83.72\% on the Synapse dataset. In addition, we evaluate the robustness of TFCNs to lesion-area effects on public COVID-19 datasets. The Python code will be made publicly available at https://github.com/HUANGLIZI/TFCNs.
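The abstract names two components added to FC-DenseNet: the ResLinear-Transformer (RL-Transformer) and the Convolutional Linear Attention Block (CLAB). The authors' actual definitions live in the linked repository; purely as a rough illustration of the general idea, the sketch below shows one common way to build a convolutional linear-attention block in PyTorch, where 1x1 convolutions produce queries, keys, and values and a softmax-free (linear) attention kernel replaces standard dot-product attention. All class and parameter names here are hypothetical and are not taken from the TFCNs code.

# Minimal sketch, assuming a PyTorch-style convolutional linear-attention block;
# this is NOT the paper's CLAB implementation, only an illustrative stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLinearAttention(nn.Module):
    """Toy convolutional linear attention: 1x1 convs produce Q, K, V and
    attention uses a feature-map (ELU+1) kernel instead of a softmax."""
    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2)                    # (b, c, h*w)
        k = self.to_k(x).flatten(2)                    # (b, c, h*w)
        v = self.to_v(x).flatten(2)                    # (b, c, h*w)
        # Positive feature maps make the attention kernel linear in sequence length.
        q = F.elu(q) + 1.0
        k = F.elu(k) + 1.0
        # Aggregate keys and values once: kv[b, c, d] = sum_n k[b, c, n] * v[b, d, n]
        kv = torch.einsum('bcn,bdn->bcd', k, v)        # (b, c, c)
        # Per-position normalizer: 1 / (q_n . sum_n k_n)
        z = 1.0 / (torch.einsum('bcn,bc->bn', q, k.sum(dim=-1)) + 1e-6)
        out = torch.einsum('bcd,bcn,bn->bdn', kv, q, z)  # (b, c, h*w)
        out = out.reshape(b, c, h, w)
        return self.proj(out) + x                      # residual connection

# Usage example on a dummy feature map.
x = torch.randn(1, 64, 32, 32)
print(ConvLinearAttention(64)(x).shape)                # torch.Size([1, 64, 32, 32])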