Paper Title
A Self-Supervised Feature Map Augmentation (FMA) Loss and Combined Augmentations Finetuning to Efficiently Improve the Robustness of CNNs
Paper Authors
Paper Abstract
Deep neural networks are often not robust to semantically irrelevant changes in the input. In this work, we address the robustness of state-of-the-art deep convolutional neural networks (CNNs) against commonly occurring distortions in the input, such as photometric changes or the addition of blur and noise. These changes in the input are often accounted for during training in the form of data augmentation. We make two major contributions: First, we propose a new regularization loss called the feature-map augmentation (FMA) loss, which can be used during finetuning to make a model robust to several distortions in the input. Second, we propose a new combined augmentations (CA) finetuning strategy that yields a single model robust to several augmentation types at the same time in a data-efficient manner. We use the CA strategy to improve an existing state-of-the-art method called stability training (ST). Using CA on an image classification task with distorted images, we achieve an absolute accuracy improvement of, on average, 8.94% with FMA and 8.86% with ST on CIFAR-10, and 8.04% with FMA and 8.27% with ST on ImageNet, compared to 1.98% and 2.12%, respectively, with the well-known data augmentation method, while preserving clean baseline performance.
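The abstract does not give the exact form of the FMA loss, but its description, a self-supervised regularizer that ties the feature maps of a distorted input to those of the clean input, can be sketched as below. The squared-distance penalty, the weighting factor `lam`, and the function names are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def fma_regularizer(fm_clean, fm_distorted):
    """Hypothetical FMA penalty: mean squared distance between the
    feature maps produced for the clean and the distorted input.
    Pulling these together encourages invariance to the distortion."""
    return float(np.mean((fm_clean - fm_distorted) ** 2))

def total_loss(task_loss, fm_clean, fm_distorted, lam=0.01):
    """Finetuning objective: the usual task loss plus the weighted
    self-supervised feature-map term (lam is an assumed hyperparameter)."""
    return task_loss + lam * fma_regularizer(fm_clean, fm_distorted)

# Identical feature maps incur no penalty; only the task loss remains.
fm = np.ones((8, 16, 16))  # one feature map, shape C x H x W
print(total_loss(0.5, fm, fm))  # -> 0.5
```

In the same spirit, the CA strategy would amount to sampling a distortion type (blur, noise, photometric change, etc.) per batch when generating `fm_distorted`, so one finetuned model covers all augmentation types at once.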