Paper Title

Text2LIVE: Text-Driven Layered Image and Video Editing

Authors

Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, Tali Dekel

Abstract

We present a method for zero-shot, text-driven appearance manipulation in natural images and videos. Given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., object's texture) or augment the scene with visual effects (e.g., smoke, fire) in a semantically meaningful manner. We train a generator using an internal dataset of training examples, extracted from a single input (image or video and target text prompt), while leveraging an external pre-trained CLIP model to establish our losses. Rather than directly generating the edited output, our key idea is to generate an edit layer (color+opacity) that is composited over the original input. This allows us to constrain the generation process and maintain high fidelity to the original input via novel text-driven losses that are applied directly to the edit layer. Our method neither relies on a pre-trained generator nor requires user-provided edit masks. We demonstrate localized, semantic edits on high-resolution natural images and videos across a variety of objects and scenes.
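
The central mechanism described in the abstract, generating an edit layer (color + opacity) that is alpha-composited over the original input and scored against the target text prompt with a pre-trained CLIP model, can be illustrated with a short sketch. This is a minimal illustration assuming PyTorch and the OpenAI CLIP package; the function names, the plain cosine-similarity loss, and the preprocessing helper are illustrative assumptions, not the paper's actual losses or code.

```python
# Minimal sketch of layered text-driven editing: a generator predicts an edit
# layer (RGB color + opacity alpha), the layer is composited over the input,
# and the composite is compared to the text prompt in CLIP embedding space.
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package


def composite(edit_rgb, edit_alpha, source_rgb):
    """Alpha-composite the generated edit layer over the original image.

    edit_rgb, source_rgb: (B, 3, H, W) tensors in [0, 1];
    edit_alpha: (B, 1, H, W) tensor in [0, 1].
    """
    return edit_alpha * edit_rgb + (1.0 - edit_alpha) * source_rgb


def clip_similarity_loss(clip_model, composited, text_tokens):
    """Cosine distance between CLIP embeddings of the composite and the prompt.

    `composited` must already be resized and normalized to CLIP's expected
    input (e.g. 224x224 with CLIP's mean/std). This simple similarity term is
    an assumption standing in for the paper's text-driven losses.
    """
    img_feat = F.normalize(clip_model.encode_image(composited), dim=-1)
    txt_feat = F.normalize(clip_model.encode_text(text_tokens), dim=-1)
    return 1.0 - (img_feat * txt_feat).sum(dim=-1).mean()


# Usage sketch (hypothetical names): only the generator is trained, CLIP stays frozen.
# model, _ = clip.load("ViT-B/32")
# tokens = clip.tokenize(["fire"])
# rgb, alpha = generator(frame)                      # generator is assumed
# out = composite(rgb, alpha, frame)
# loss = clip_similarity_loss(model, preprocess_for_clip(out), tokens)
```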
