Paper Title

Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data

Paper Authors

Alexander Robey, Hamed Hassani, George J. Pappas

Paper Abstract

While deep learning has resulted in major breakthroughs in many application domains, the frameworks commonly used in deep learning remain fragile to artificially-crafted and imperceptible changes in the data. In response to this fragility, adversarial training has emerged as a principled approach for enhancing the robustness of deep learning with respect to norm-bounded perturbations. However, there are other sources of fragility for deep learning that are arguably more common and less thoroughly studied. Indeed, natural variation such as lighting or weather conditions can significantly degrade the accuracy of trained neural networks, proving that such natural variation presents a significant challenge for deep learning. In this paper, we propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning. Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data. Critical to our paradigm is first obtaining a model of natural variation which can be used to vary data over a range of natural conditions. Such models may be either known a priori or else learned from data. In the latter case, we show that deep generative models can be used to learn models of natural variation that are consistent with realistic conditions. We then exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model. Our extensive experiments show that across a variety of naturally-occurring conditions and across various datasets, deep neural networks trained with our model-based algorithms significantly outperform both standard deep learning algorithms as well as norm-bounded robust deep learning algorithms.
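
The abstract does not spell out the three model-based robust training algorithms. As a rough illustration of the general idea it describes, the sketch below assumes a learned model of natural variation G(x, delta) (e.g., the decoder of a deep generative model) that maps an input x and a nuisance code delta to a naturally varied version of x, and performs a worst-case search over delta before each classifier update. This is analogous to adversarial training, but over the model's nuisance space rather than a norm-bounded pixel ball; the function names, the nuisance dimension, and the inner-maximization scheme are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch (assumed interface, not the paper's exact algorithm):
# one model-based robust training step with a learned model of natural
# variation G(x, delta).
import torch
import torch.nn.functional as F

def model_based_robust_step(classifier, G, x, y, optimizer,
                            delta_dim=8, steps=5, step_size=0.1):
    """Inner maximization over the nuisance code delta, then an outer
    minimization step over the classifier parameters."""
    # Inner loop: ascend the loss w.r.t. delta to find a worst-case
    # natural variation of the batch.
    delta = torch.zeros(x.size(0), delta_dim, device=x.device,
                        requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(G(x, delta)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).detach().requires_grad_(True)

    # Outer step: minimize the loss on the worst-case varied data.
    optimizer.zero_grad()
    robust_loss = F.cross_entropy(classifier(G(x, delta.detach())), y)
    robust_loss.backward()
    optimizer.step()
    return robust_loss.item()
```

In this reading, the key design choice is that robustness is defined with respect to the given model of natural variation: the classifier is only required to be invariant to changes that G can produce (e.g., lighting or weather shifts), rather than to arbitrary norm-bounded pixel perturbations.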
