Title
Test-Time Training Can Close the Natural Distribution Shift Performance Gap in Deep Learning Based Compressed Sensing
Authors
Abstract
Deep learning based image reconstruction methods outperform traditional methods. However, neural networks suffer from a performance drop when applied to images from a different distribution than the training images. For example, a model trained for reconstructing knees in accelerated magnetic resonance imaging (MRI) does not reconstruct brains well, even though the same network trained on brains reconstructs brains perfectly well. Thus there is a distribution shift performance gap for a given neural network, defined as the difference in performance when training on a distribution $P$ and training on another distribution $Q$, and evaluating both models on $Q$. In this work, we propose a domain adaptation method for deep learning based compressive sensing that relies on self-supervision during training paired with test-time training at inference. We show that for four natural distribution shifts, this method essentially closes the distribution shift performance gap for state-of-the-art architectures for accelerated MRI.