Paper Title
Conditional Generative Modeling via Learning the Latent Space
Paper Authors
Abstract
Although deep learning has achieved appealing results on several machine learning tasks, most models are deterministic at inference, limiting their application to single-modal settings. We propose a novel general-purpose framework for conditional generation in multimodal spaces that uses latent variables to model generalizable learning patterns while minimizing a family of regression cost functions. At inference, the latent variables are optimized to find optimal solutions corresponding to multiple output modes. Compared to existing generative solutions, our approach demonstrates faster, more stable convergence in multimodal spaces and learns better representations for downstream tasks. Importantly, it provides a simple generic model that can beat highly engineered pipelines tailored with domain expertise on a variety of tasks, while generating diverse outputs. Our code will be released.
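The inference step described in the abstract (optimizing latent variables until they land in different output modes) can be illustrated on a toy 1-D problem. This is a minimal sketch under stated assumptions, not the paper's actual model: the cost function, the decoder `f(x, z) = x * z`, and all hyperparameters below are hypothetical stand-ins chosen only to show how multiple restarts of latent-space gradient descent recover distinct modes.

```python
import random

def cost_grad(z):
    # Gradient of a toy regression cost (z^2 - 1)^2 whose two minima,
    # z = -1 and z = +1, stand in for two output modes (hypothetical).
    return 4.0 * z * (z * z - 1.0)

def optimize_latent(z0, lr=0.01, steps=500):
    # Inference-time gradient descent on the latent variable z.
    z = z0
    for _ in range(steps):
        z -= lr * cost_grad(z)
    return z

def generate(x, n_samples=8, seed=0):
    # Restart from several random latents; each converged latent is
    # decoded with a hypothetical decoder f(x, z) = x * z, so a single
    # condition x yields multiple distinct outputs.
    rng = random.Random(seed)
    zs = [optimize_latent(rng.uniform(-2.0, 2.0)) for _ in range(n_samples)]
    return sorted({round(x * z, 3) for z in zs})

print(generate(2.0))  # distinct outputs, one per recovered mode
```

Because the cost is multimodal in `z`, different initializations converge to different minima, which is the mechanism the framework relies on to produce diverse outputs for the same conditioning input.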