Paper Title
Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis
Paper Authors
Paper Abstract
In medical imaging analysis, deep learning has shown promising results. We frequently rely on volumetric data to segment medical images, necessitating the use of 3D architectures, which are commended for their capacity to capture interslice context. However, because of the 3D convolutions, max pooling, up-convolutions, and other operations used in these networks, such architectures are often less efficient in time and computation than their 2D equivalents. Furthermore, few 3D pretrained model weights are available, and pretraining is often difficult. We present a simple yet effective 2D method for handling 3D data while efficiently embedding 3D knowledge during training. To address these challenges, we propose transforming volumetric data into 2D super images and segmenting them with 2D networks. Our method generates a single high-resolution 2D image by stitching the slices of the 3D volume side by side, and we expect deep neural networks to capture and learn these relationships spatially despite the loss of explicit depth information. This work aims to offer a novel perspective on handling volumetric data, and we test the hypothesis using CNN and ViT networks as well as self-supervised pretraining. While attaining results equal, if not superior, to those of 3D networks using only their 2D counterparts, model complexity is reduced by around threefold. Because volumetric data is relatively scarce, we anticipate that our approach will attract further research, particularly in medical imaging analysis.
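To make the slice-stitching idea concrete, below is a minimal sketch of how a volume could be rearranged into a 2D super image. The function name `to_super_image`, the `cols` parameter, and the row-major grid layout are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def to_super_image(volume: np.ndarray, cols: int) -> np.ndarray:
    """Tile the slices of a (D, H, W) volume into one 2D grid.

    Sketch of the super-image construction: slices are stitched side by
    side into a rows x cols grid (layout is an assumption, not the
    paper's exact configuration).
    """
    d, h, w = volume.shape
    rows = int(np.ceil(d / cols))
    # Pad with empty slices so the grid is completely filled.
    pad = rows * cols - d
    if pad:
        volume = np.concatenate(
            [volume, np.zeros((pad, h, w), dtype=volume.dtype)], axis=0
        )
    # (rows*cols, H, W) -> (rows, cols, H, W) -> (rows*H, cols*W)
    return (
        volume.reshape(rows, cols, h, w)
        .transpose(0, 2, 1, 3)
        .reshape(rows * h, cols * w)
    )

# Example: a 16-slice volume of 64x64 slices becomes a 256x256 super image
# that a standard 2D segmentation network can consume directly.
vol = np.random.rand(16, 64, 64).astype(np.float32)
si = to_super_image(vol, cols=4)
assert si.shape == (256, 256)
```

The inverse rearrangement would map a 2D prediction on the super image back to per-slice masks, so standard 2D segmentation pipelines can be reused end to end.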