Paper Title
Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks
Paper Authors
Paper Abstract
Surgical tool segmentation in endoscopic images is an important problem: it is a crucial step towards full instrument pose estimation, and it is used for integrating pre- and intra-operative images into the endoscopic view. While many recent approaches based on convolutional neural networks have shown strong results, a key barrier to progress lies in the acquisition of the large number of manually annotated images needed for an algorithm to generalize and work well in diverse surgical scenarios. Unlike the surgical image data itself, annotations are difficult to acquire and may be of variable quality. On the other hand, synthetic annotations can be generated automatically using the forward kinematic model of the robot and CAD models of the tools, by projecting the tools onto the image plane. Unfortunately, this model is very inaccurate and cannot be used for supervised learning of image segmentation models. Since the generated annotations do not correspond directly to the endoscopic images because of these errors, we formulate the problem as unpaired image-to-image translation, where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation using an adversarial model. Our approach allows image segmentation models to be trained without the need to acquire expensive annotations, and it can potentially exploit large unlabeled endoscopic image collections outside the annotated distribution of image/annotation data. We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
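For context, the cycle-consistent adversarial framework the title refers to (in the style of CycleGAN) learns two mappings, G: X → Y from endoscopic images to annotations and F: Y → X in the reverse direction, together with discriminators D_X and D_Y. Below is a minimal sketch of the standard CycleGAN objective under the assumption that the paper follows the original formulation; the exact losses, weighting λ, and any task-specific terms used in this work may differ:

% Adversarial loss: G should produce annotations D_Y cannot
% distinguish from real ones (F and D_X get the symmetric term).
\mathcal{L}_{\mathrm{GAN}}(G, D_Y) =
  \mathbb{E}_{y \sim p(y)}\big[\log D_Y(y)\big]
  + \mathbb{E}_{x \sim p(x)}\big[\log\big(1 - D_Y(G(x))\big)\big]

% Cycle-consistency loss: translating an image to an annotation
% and back should recover the original image, and vice versa.
\mathcal{L}_{\mathrm{cyc}}(G, F) =
  \mathbb{E}_{x \sim p(x)}\big[\lVert F(G(x)) - x \rVert_1\big]
  + \mathbb{E}_{y \sim p(y)}\big[\lVert G(F(y)) - y \rVert_1\big]

% Full objective, with \lambda trading off the two terms.
\mathcal{L}(G, F, D_X, D_Y) =
  \mathcal{L}_{\mathrm{GAN}}(G, D_Y)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_X)
  + \lambda\, \mathcal{L}_{\mathrm{cyc}}(G, F)

The cycle-consistency term is what makes unpaired training possible: without paired image/annotation examples, the adversarial losses alone only constrain the output distributions, whereas requiring F(G(x)) ≈ x ties each generated annotation back to the specific input image.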