Paper Title
Learned Equivariant Rendering without Transformation Supervision
Paper Authors
Paper Abstract
We propose a self-supervised framework to learn scene representations from video that are automatically delineated into objects and background. Our method relies on moving objects being equivariant with respect to their transformation across frames and the background being constant. After training, we can manipulate and render the scenes in real time to create unseen combinations of objects, transformations, and backgrounds. We show results on Moving MNIST with backgrounds.
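The core constraint in the abstract is equivariance: applying a transformation to an object's latent representation and then rendering should give the same result as rendering first and transforming the image. A minimal toy sketch of this property, where the "latent" is just an object position, the renderer paints a single pixel, and the transformation is a cyclic translation (the function names `render`, `transform_latent`, and `transform_image` are illustrative, not from the paper):

```python
import numpy as np

def render(pos, size=16):
    # Paint a one-pixel "object" at position pos on an empty canvas.
    canvas = np.zeros((size, size))
    canvas[pos[0] % size, pos[1] % size] = 1.0
    return canvas

def transform_latent(pos, shift):
    # Apply the transformation in latent (position) space.
    return (pos[0] + shift[0], pos[1] + shift[1])

def transform_image(img, shift):
    # Apply the same translation in image space (cyclic for simplicity).
    return np.roll(img, shift, axis=(0, 1))

pos = (4, 5)
shift = (3, -2)
# Equivariance: render(T(z)) == T(render(z))
a = render(transform_latent(pos, shift))
b = transform_image(render(pos), shift)
print(np.allclose(a, b))  # True
```

In the actual framework, the latent and renderer are learned networks rather than hand-coded, and this consistency across frames serves as the self-supervised training signal, so no transformation labels are needed.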