Paper Title
LSM: Learning Subspace Minimization for Low-level Vision
Paper Authors
Paper Abstract
We study the energy minimization problem in low-level vision tasks from a novel perspective. We replace the heuristic regularization term with a learnable subspace constraint, and preserve the data term to exploit domain knowledge derived from the first principles of a task. This learning subspace minimization (LSM) framework unifies the network structures and the parameters for many low-level vision tasks, which allows us to train a single network for multiple tasks simultaneously with completely shared parameters, and even to generalize the trained network to an unseen task as long as its data term can be formulated. We demonstrate our LSM framework on four low-level tasks, including interactive image segmentation, video segmentation, stereo matching, and optical flow, and validate the network on various datasets. The experiments show that the proposed LSM generates state-of-the-art results with smaller model size, faster training convergence, and real-time inference.
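To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of subspace-constrained energy minimization: a quadratic data term is minimized not over the full variable space but over the coefficients of a low-dimensional basis, which stands in for the learned subspace. All names (`A`, `b`, `V`) and the random basis are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 80, 8          # variable size, measurements, subspace dimension
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Orthonormal basis V: a stand-in for the subspace a network would predict.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Data term E(x) = ||A x - b||^2, constrained to x = V @ c.
# Substituting the constraint turns a minimization over n unknowns
# into a small least-squares problem over k coefficients.
c, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
x = V @ c

residual = np.linalg.norm(A @ x - b)
```

The design point is that the expensive, hand-crafted regularizer disappears: the subspace itself constrains the solution, and the remaining minimization is a tiny, differentiable linear solve that can sit inside a network.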