Paper Title

Extremely Dense Point Correspondences using a Learned Feature Descriptor

Authors

Liu, Xingtong, Zheng, Yiping, Killeen, Benjamin, Ishii, Masaru, Hager, Gregory D., Taylor, Russell H., Unberath, Mathias

Abstract


High-quality 3D reconstructions from endoscopy video play an important role in many clinical applications, including surgical navigation where they enable direct video-CT registration. While many methods exist for general multi-view 3D reconstruction, these methods often fail to deliver satisfactory performance on endoscopic video. Part of the reason is that local descriptors that establish pair-wise point correspondences, and thus drive reconstruction, struggle when confronted with the texture-scarce surface of anatomy. Learning-based dense descriptors usually have larger receptive fields enabling the encoding of global information, which can be used to disambiguate matches. In this work, we present an effective self-supervised training scheme and novel loss design for dense descriptor learning. In direct comparison to recent local and dense descriptors on an in-house sinus endoscopy dataset, we demonstrate that our proposed dense descriptor can generalize to unseen patients and scopes, thereby largely improving the performance of Structure from Motion (SfM) in terms of model density and completeness. We also evaluate our method on a public dense optical flow dataset and a small-scale SfM public dataset to further demonstrate the effectiveness and generality of our method. The source code is available at https://github.com/lppllppl920/DenseDescriptorLearning-Pytorch.
