Title


Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization

Authors

Peter Schaldenbrand, Zhixuan Liu, Jean Oh

Abstract


We introduce an approach to generating videos based on a series of given language descriptions. Frames of the video are generated sequentially and optimized by guidance from the CLIP image-text encoder, iterating through the language descriptions and weighting the current description higher than the others. As opposed to optimizing through an image generator model itself, which tends to be computationally heavy, the proposed approach computes the CLIP loss directly at the pixel level, achieving general content at a speed suitable for near real-time systems. The approach can generate videos at up to 720p resolution, with variable frame rates and arbitrary aspect ratios, at a rate of 1-2 frames per second. Please visit our website to view videos and access our open-source code: https://pschaldenbrand.github.io/text2video/ .
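The core loop described in the abstract — optimize each frame's pixels directly against a weighted CLIP loss over the descriptions, then warm-start the next frame from the current one — can be sketched as below. This is a minimal toy illustration, not the authors' implementation: the real method uses CLIP's image and text encoders, whereas here `embed` and `text_embs` are hypothetical stand-ins (flatten-and-normalize, random unit vectors) so the sketch is self-contained; the frame size, step count, learning rate, and the 1.0/0.1 weighting are also illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "text embeddings" for three descriptions (hypothetical;
# the paper uses CLIP's text encoder, e.g. 512-d for ViT-B/32).
D = 64
text_embs = rng.normal(size=(3, D))
text_embs /= np.linalg.norm(text_embs, axis=1, keepdims=True)

def embed(frame):
    # Stand-in "image encoder": flatten and L2-normalize the pixels.
    v = frame.ravel()
    return v / np.linalg.norm(v)

def weighted_loss_grad(frame, weights):
    # loss = sum_d w_d * (1 - cos(embed(frame), text_d)),
    # with the gradient taken directly with respect to the pixels.
    x = frame.ravel()
    n = np.linalg.norm(x)
    loss, grad = 0.0, np.zeros_like(x)
    for w, t in zip(weights, text_embs):
        c = x @ t / n
        loss += w * (1.0 - c)
        grad += -w * (t / n - c * x / n**2)  # d(loss)/d(pixels)
    return loss, grad.reshape(frame.shape)

def generate_video(num_frames=3, steps=300, lr=2.0):
    frame = rng.normal(size=(8, 8))  # toy 8x8 single-channel "image"
    frames = []
    for f in range(num_frames):
        # Weight the current description higher than the others.
        weights = np.full(len(text_embs), 0.1)
        weights[min(f, len(text_embs) - 1)] = 1.0
        for _ in range(steps):
            _, g = weighted_loss_grad(frame, weights)
            frame = frame - lr * g  # pixel-level gradient descent
        frames.append(frame.copy())
        # The next frame starts from this one, which keeps
        # consecutive frames visually coherent.
    return frames

frames = generate_video()
```

Because each frame is initialized from its predecessor rather than from scratch, content carries over between descriptions, which is what gives the generated video its temporal continuity; skipping the backward pass through a generator network is what makes the per-frame cost low enough for near real-time use.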
