Paper Title

Adversarially Robust Learning via Entropic Regularization

Paper Authors

Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde

Paper Abstract

In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) performance in terms of robust classification accuracy as compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
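The abstract describes the method only at a high level: adversarial samples are drawn from a distribution that favors high-loss points near each training sample, and the model is trained on those samples. Below is a minimal PyTorch sketch of one plausible instantiation, assuming the sampling distribution is a Gibbs-style density p(x') ∝ exp(loss(x') − ‖x' − x‖²/(2γ)) approximated with a few Langevin steps. All names and hyperparameters here (entropic_adversarial_step, gamma, step_size, noise_scale, langevin_steps) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def entropic_adversarial_step(model, optimizer, x, y, gamma=0.05,
                              step_size=0.01, noise_scale=1e-3,
                              langevin_steps=10):
    # Sample a high-loss neighbor x' of each input by Langevin-style ascent on
    # log p(x') = loss(x') - ||x' - x||^2 / (2 * gamma), i.e. a density that
    # assigns high probability to high-loss points near the training sample.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(langevin_steps):
        loss = F.cross_entropy(model(x_adv), y)
        log_p = loss - ((x_adv - x) ** 2).sum() / (2 * gamma)
        grad, = torch.autograd.grad(log_p, x_adv)
        with torch.no_grad():
            x_adv += step_size * grad                       # ascend the log-density
            x_adv += noise_scale * torch.randn_like(x_adv)  # Langevin noise
    # Descend the model loss at the sampled adversarial points.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv.detach()), y).backward()
    optimizer.step()
```

In an exact SGLD sampler the noise standard deviation would be tied to the step size (√(2·step_size)), and a practical implementation would likely also project x_adv back into a norm ball around x; this sketch omits both for brevity.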
