Paper Title

Trimmed Maximum Likelihood Estimation for Robust Learning in Generalized Linear Models

Authors

Pranjal Awasthi, Abhimanyu Das, Weihao Kong, Rajat Sen

Abstract

We study the problem of learning generalized linear models under adversarial corruptions. We analyze a classical heuristic called the iterative trimmed maximum likelihood estimator which is known to be effective against label corruptions in practice. Under label corruptions, we prove that this simple estimator achieves minimax near-optimal risk on a wide range of generalized linear models, including Gaussian regression, Poisson regression and Binomial regression. Finally, we extend the estimator to the more challenging setting of label and covariate corruptions and demonstrate its robustness and optimality in that setting as well.
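The iterative trimmed MLE described above alternates between fitting the model on a kept subset and re-selecting the samples with the smallest negative log-likelihood. Below is a minimal sketch of this idea for the Gaussian-regression special case (where the negative log-likelihood reduces to squared loss); the function name, the fixed iteration count, and the synthetic corruption model are illustrative assumptions, not the paper's exact algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Gaussian regression with a fraction of adversarially shifted labels.
n, d, eps = 500, 5, 0.1               # samples, dimension, corruption fraction
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.1 * rng.standard_normal(n)
k = int(eps * n)                      # number of corrupted labels
y[:k] += 50.0                         # corrupt a fraction of the labels

def iterative_trimmed_mle(X, y, trim, iters=20):
    """Alternate between (1) fitting the MLE on the kept samples and
    (2) keeping the n - trim samples with the smallest loss
    (squared loss = Gaussian negative log-likelihood up to constants)."""
    n = len(y)
    keep = np.arange(n)               # start from all samples
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        losses = (y - X @ beta) ** 2
        keep = np.argsort(losses)[: n - trim]
    return beta

beta_hat = iterative_trimmed_mle(X, y, trim=k)
print(np.linalg.norm(beta_hat - beta_true))   # small estimation error
```

An ordinary least-squares fit on the full corrupted data would be badly biased by the shifted labels; trimming the highest-loss samples at each round lets the fit concentrate on the clean points. For other GLMs (Poisson, binomial), the squared loss would be replaced by the corresponding negative log-likelihood.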
