Paper Title
StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation
Paper Authors
Paper Abstract
Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities to generate visualizations from a given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions, forming a narrative. Moreover, we find that the story visualization task fails to accommodate generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. We then enhance, or 'retro-fit', the pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. Next, we explore full-model finetuning, as well as prompt-based tuning for parameter-efficient adaptation, of the pretrained model. We evaluate our approach, StoryDALL-E, on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset, DiDeMoSV, collected from a video-captioning dataset. We also develop a GAN-based model, StoryGANc, for story continuation and compare it with StoryDALL-E to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story. Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.
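To make the retro-fitting idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: a frozen attention layer stands in for one layer of the pretrained text-to-image transformer, and a new, trainable cross-attention module attends over source-frame tokens, mirroring the mechanism described for copying visual elements from the initial frame. The class name RetroCrossAttentionBlock, the dimensions, and the token shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RetroCrossAttentionBlock(nn.Module):
    """Illustrative sketch (hypothetical): a frozen 'pretrained' self-attention
    layer augmented with a new cross-attention layer over source-frame tokens."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Stand-in for a layer of the pretrained text-to-image transformer (kept frozen).
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # New, randomly initialized cross-attention over source-image tokens (trainable).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Freeze the "pretrained" weights; only the retro-fitted module is updated.
        for p in self.self_attn.parameters():
            p.requires_grad = False

    def forward(self, x: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # Standard (frozen) self-attention over the caption/image token sequence.
        h, _ = self.self_attn(x, x, x)
        x = self.norm1(x + h)
        # Cross-attention lets the generator copy visual elements from the source frame.
        h, _ = self.cross_attn(x, source_tokens, source_tokens)
        return self.norm2(x + h)


if __name__ == "__main__":
    block = RetroCrossAttentionBlock()
    caption_tokens = torch.randn(2, 64, 512)   # hypothetical caption/image token embeddings
    source_tokens = torch.randn(2, 256, 512)   # hypothetical source-frame token embeddings
    out = block(caption_tokens, source_tokens)
    print(out.shape)  # torch.Size([2, 64, 512])
```

Training only the newly added module (with the pretrained weights frozen) is one way to realize the parameter-efficient adaptation the abstract contrasts with full-model finetuning.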