Paper Title

Regression via Implicit Models and Optimal Transport Cost Minimization

Paper Authors

Manchanda, Saurav; Doan, Khoa; Yadav, Pranjul; Keerthi, S. Sathiya

Abstract

This paper addresses the classic problem of regression, which involves the inductive learning of a map, $y=f(x,z)$, with $z$ denoting noise, $f:\mathbb{R}^n\times \mathbb{R}^k \rightarrow \mathbb{R}^m$. Recently, the Conditional GAN (CGAN) has been applied to regression and has been shown to be advantageous over other standard approaches such as Gaussian Process Regression, given its ability to implicitly model complex noise forms. However, the current CGAN implementation for regression uses the classical generator-discriminator architecture with the minimax optimization approach, which is notorious for being difficult to train due to issues like training instability or failure to converge. In this paper, we take another step towards regression models that implicitly model the noise, and propose a solution that directly optimizes the optimal transport cost between the true probability distribution $p(y|x)$ and the estimated distribution $\hat{p}(y|x)$, and does not suffer from the issues associated with the minimax approach. On a variety of synthetic and real-world datasets, our proposed solution achieves state-of-the-art results. The code accompanying this paper is available at "https://github.com/gurdaspuriya/ot_regression".
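
The abstract only sketches the approach, so the following is a minimal, hypothetical illustration (not the authors' implementation; see the linked repository for that) of fitting an implicit model $\hat{y} = g(x, z)$ by minimizing a sample-based optimal transport cost between observed and generated outputs. It assumes a scalar output and a synthetic dataset with several observations per input, in which case the squared 1-D 2-Wasserstein distance between two equal-size empirical samples reduces to the mean squared difference of their sorted values; the network `g`, the per-input sample count `K`, and all hyperparameters are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch: implicit regression trained with a 1-D optimal transport cost.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: for each x, draw K noisy observations with heteroscedastic noise.
N, K = 256, 16                                   # number of x locations, samples per location
x = torch.rand(N, 1)
y = torch.sin(2 * torch.pi * x) + (0.05 + 0.2 * x) * torch.randn(N, K)

# Implicit model: g(x, z) maps an input and a noise draw to a prediction.
g = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(),
                  nn.Linear(64, 1))
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(N, K, 1)                      # K noise draws per x
    x_rep = x.unsqueeze(1).expand(N, K, 1)        # repeat each x for its K draws
    y_hat = g(torch.cat([x_rep, z], dim=-1)).squeeze(-1)   # (N, K) generated samples

    # 1-D OT cost per x: match sorted generated samples to sorted observations,
    # then average the squared differences over locations and samples.
    loss = ((torch.sort(y_hat, dim=1).values - torch.sort(y, dim=1).values) ** 2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

    if step % 500 == 0:
        print(f"step {step:4d}  OT loss {loss.item():.4f}")
```

Because the sorted-sample matching is exact only in one dimension, a multi-dimensional output would require a different sample-based OT estimate; this sketch is meant solely to make the "directly optimize the transport cost, no discriminator" idea concrete.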
