Paper Title
Towards Augmented Reality-based Suturing in Monocular Laparoscopic Training
Paper Authors
Paper Abstract
Minimally Invasive Surgery (MIS) techniques have gained rapid popularity among surgeons since they offer significant clinical benefits, including reduced recovery time and diminished post-operative adverse effects. However, conventional endoscopic systems output monocular video, which compromises depth perception, spatial orientation, and field of view. Suturing is one of the most complex tasks performed under these circumstances. A key component of this task is the interplay between the needle holder and the surgical needle. Reliable real-time 3D localization of the needle and instruments could be used to augment the scene with additional parameters that describe their quantitative geometric relation, e.g., the relation between the estimated needle plane, its rotation center, and the instrument. This could contribute towards the standardization and training of basic skills and operative techniques, enhance overall surgical performance, and reduce the risk of complications. This paper proposes an Augmented Reality environment with quantitative and qualitative visual representations to enhance the outcomes of laparoscopic training performed on a silicone pad. This is enabled by a multi-task supervised deep neural network which performs multi-class segmentation and depth map prediction. The scarcity of labels has been overcome by creating a virtual environment which resembles the surgical training scenario to generate dense depth maps and segmentation maps. The proposed convolutional neural network was tested on real surgical training scenarios and was shown to be robust to occlusion of the needle. The network achieves a Dice score of 0.67 for surgical needle segmentation, 0.81 for needle holder instrument segmentation, and a mean absolute error of 6.5 mm for depth estimation.
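The abstract does not specify the network architecture or training losses. As an illustration only, the sketch below shows one possible multi-task encoder-decoder in PyTorch with a shared encoder and two heads (multi-class segmentation and dense depth regression), together with a standard Dice score as reported per class in the abstract. All layer sizes, the three-class assumption (background, needle, needle holder), and the loss weighting are assumptions, not the authors' implementation.

```python
# Minimal multi-task sketch (illustrative; layer sizes, class count,
# and loss weights are assumed, not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared encoder: two strided conv blocks (downsample by 4x overall).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Segmentation head: upsample back to input resolution, per-pixel class logits.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )
        # Depth head: upsample and regress one depth value per pixel.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        features = self.encoder(x)
        return self.seg_head(features), self.depth_head(features)


def multitask_loss(seg_logits, depth_pred, seg_target, depth_target, w_depth=1.0):
    """Joint loss: cross-entropy for segmentation plus L1 for depth (weighting assumed)."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    depth_loss = F.l1_loss(depth_pred, depth_target)
    return seg_loss + w_depth * depth_loss


def dice_score(pred_mask: torch.Tensor, target_mask: torch.Tensor, eps: float = 1e-7) -> float:
    """Dice coefficient for one binary class mask (the metric reported in the abstract)."""
    intersection = (pred_mask & target_mask).sum().item()
    return (2.0 * intersection + eps) / (pred_mask.sum().item() + target_mask.sum().item() + eps)


if __name__ == "__main__":
    net = MultiTaskNet()
    frame = torch.randn(1, 3, 256, 256)       # dummy monocular RGB frame
    seg_logits, depth = net(frame)
    print(seg_logits.shape, depth.shape)       # (1, 3, 256, 256), (1, 1, 256, 256)
```

In such a setup the two heads share encoder features, so supervision from the synthetic dense depth maps and segmentation maps described in the abstract can be applied jointly on the virtual training scenes and the model then evaluated on real training footage.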