Paper Title

Voxelmorph++ Going beyond the cranial vault with keypoint supervision and multi-channel instance optimisation

Paper Authors

Mattias P. Heinrich, Lasse Hansen

Paper Abstract

The majority of current research in deep learning based image registration addresses inter-patient brain registration with moderate deformation magnitudes. The recent Learn2Reg medical registration benchmark has demonstrated that single-scale U-Net architectures such as VoxelMorph, which directly employ a spatial transformer loss, often do not generalise well beyond the cranial vault and fall short of state-of-the-art performance for abdominal or intra-patient lung registration. Here, we propose two straightforward steps that greatly reduce this gap in accuracy. First, we employ keypoint self-supervision with a novel network head that predicts a discretised heatmap and robustly reduces large deformations. Second, we replace multiple learned fine-tuning steps with a single instance optimisation that uses hand-crafted features and the Adam optimiser. Unlike other related work, including FlowNet or PDD-Net, our approach does not require a fully discretised architecture with a correlation layer. Our ablation study demonstrates the importance of keypoints in both self-supervised and unsupervised (using only a MIND metric) settings. On a multi-centric inspiration-exhale lung CT dataset, including very challenging COPD scans, our method outperforms VoxelMorph, improving nonlinear alignment by 77% compared to 19%, and reaches target registration errors of 2 mm that outperform all but one learning-based method published to date. Extending the method to semantic features sets new state-of-the-art performance on inter-subject abdominal CT registration.
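To illustrate the first step, here is a minimal PyTorch sketch of a discretised heatmap head: for each keypoint the network outputs logits over a fixed grid of candidate displacements, a softmax turns them into a heatmap, and its expectation (soft-argmax) yields a continuous displacement. All shapes, names (`heatmap_logits`, `disp_grid`) and the capture radius are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

K, Q = 128, 11**3                  # keypoints, candidate displacements (11^3 grid) - assumed sizes
r = 0.3                            # capture radius in normalised coordinates - assumed

# Regular 11 x 11 x 11 grid of candidate displacements in [-r, r]^3
lin = torch.linspace(-r, r, 11)
disp_grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1).reshape(Q, 3)

heatmap_logits = torch.randn(K, Q)           # stand-in for the network head's output
prob = F.softmax(heatmap_logits, dim=1)      # (K, Q) discretised heatmap per keypoint
pred_disp = prob @ disp_grid                 # (K, 3) soft-argmax displacement

# Keypoint self-supervision: regress towards pre-computed correspondences
target_disp = torch.randn(K, 3) * 0.05       # stand-in keypoint displacements
loss = F.mse_loss(pred_disp, target_disp)
```

Because the head commits to a probability over a bounded displacement grid rather than regressing an unbounded vector, large motions stay inside the capture range, which is what makes the prediction robust.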
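The second step, instance optimisation, can be sketched as follows: starting from the network's coarse displacement field, Adam refines a dense field against a feature-matching term plus a diffusion regulariser. This is a hedged sketch under stated assumptions; `feat_fix`/`feat_mov` stand in for hand-crafted MIND feature volumes, and the learning rate, iteration count and regularisation weight are placeholders.

```python
import torch
import torch.nn.functional as F

C, D, H, W = 12, 64, 96, 96                  # assumed feature channels and volume size
feat_fix = torch.randn(1, C, D, H, W)        # fixed-image features (MIND stand-in)
feat_mov = torch.randn(1, C, D, H, W)        # moving-image features (MIND stand-in)

# Identity sampling grid in normalised [-1, 1] coordinates
identity = F.affine_grid(torch.eye(3, 4).unsqueeze(0), (1, C, D, H, W), align_corners=False)

# In the paper's setting this would be initialised from the network prediction
disp = torch.zeros(1, D, H, W, 3, requires_grad=True)
opt = torch.optim.Adam([disp], lr=0.05)

for _ in range(50):
    opt.zero_grad()
    warped = F.grid_sample(feat_mov, identity + disp, align_corners=False)
    sim = F.mse_loss(warped, feat_fix)       # multi-channel feature matching term
    # First-order diffusion regulariser on the displacement field
    reg = (disp[:, 1:] - disp[:, :-1]).pow(2).mean() \
        + (disp[:, :, 1:] - disp[:, :, :-1]).pow(2).mean() \
        + (disp[:, :, :, 1:] - disp[:, :, :, :-1]).pow(2).mean()
    (sim + 0.1 * reg).backward()
    opt.step()
```

The appeal of this replacement for learned fine-tuning is that it needs no extra training: a few dozen Adam steps per test case adapt the field to the instance at hand.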
