Paper Title

Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via Scribble Annotations

Authors

Qiuhui Chen, Yi Hong

Abstract

Recently, weakly-supervised image segmentation using weak annotations like scribbles has gained great attention, since such annotations are much easier to obtain than time-consuming and labor-intensive labeling at the pixel/voxel level. However, because scribbles lack structural information about the region of interest (ROI), existing scribble-based methods suffer from poor boundary localization. Furthermore, most current methods are designed for 2D image segmentation and do not fully leverage volumetric information if applied directly to image slices. In this paper, we propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles, and with a combination of static and active boundary prediction to learn the ROI's boundary and regularize its shape. Extensive experiments on three public datasets demonstrate that Scribble2D5 significantly outperforms current scribble-based methods and approaches the performance of fully-supervised ones. Our code is available online.
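The core idea of extending semantic information from scribbles can be illustrated with a generic sketch: starting from sparse scribble labels, a multi-source BFS assigns each unlabeled pixel the class of its nearest labeled seed. Note this is only a minimal stand-in for intuition; the function name and grid encoding are illustrative assumptions, and the paper's actual label propagation module is learned and operates on image features, not raw grid distance.

```python
from collections import deque

def propagate_scribbles(labels):
    """Expand sparse scribble labels to every pixel via multi-source BFS.

    `labels` is a 2D list of ints: 0 means unlabeled, any positive
    integer is a scribble class id. Each unlabeled pixel receives the
    label of its nearest seed (ties broken by BFS visiting order).
    This is a generic illustration, not the Scribble2D5 module itself.
    """
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    # Seed the queue with every scribbled pixel, then flood outward.
    queue = deque((r, c) for r in range(h) for c in range(w) if out[r][c] != 0)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and out[nr][nc] == 0:
                out[nr][nc] = out[r][c]
                queue.append((nr, nc))
    return out
```

For example, two scribble seeds in opposite corners partition the grid between them, giving a dense (if crude) pseudo-label map that a segmentation network could then be supervised with.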
