Title
Class-Level Logit Perturbation
Authors
Abstract
Features, logits, and labels are the three primary kinds of data involved when a sample passes through a deep neural network. Feature perturbation and label perturbation have received increasing attention in recent years and have proven useful in various deep learning approaches; for example, (adversarial) feature perturbation can improve the robustness and even the generalization capability of learned models. However, few studies have explicitly explored perturbing the logit vector. This work discusses several existing methods related to class-level logit perturbation and establishes a unified view connecting positive/negative data augmentation with the loss variations incurred by logit perturbation. A theoretical analysis is provided to illuminate why class-level logit perturbation is useful. Accordingly, new methods are proposed to explicitly learn to perturb logits for both single-label and multi-label classification tasks. Extensive experiments on benchmark image classification data sets and their long-tail versions demonstrate the competitive performance of our learning method. Because it operates only on logits, it can be used as a plug-in and combined with any existing classification algorithm. All code is available at https://github.com/limengyang1992/lpl.
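To make the core idea concrete, the sketch below illustrates class-level logit perturbation in its simplest form: a per-class offset vector is added to each sample's logits before softmax cross-entropy, so a shift that lowers the loss acts like positive data augmentation for that class, while one that raises it acts like negative augmentation. This is a minimal illustration, not the authors' learned method; all names, shapes, and the choice of offset are assumptions for demonstration.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # per-sample cross-entropy loss for integer class labels
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels])

def perturb_logits(logits, labels, delta):
    # class-level perturbation: every sample of class c receives the
    # same offset vector delta[c] on its logits (illustrative scheme)
    return logits + delta[labels]

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))          # 4 samples, 3 classes
labels = np.array([0, 1, 2, 1])

delta = np.zeros((3, 3))
delta[1, 1] = 2.0  # boost the true-class logit of class-1 samples only:
                   # their loss drops (positive-augmentation effect)

base = cross_entropy(logits, labels)
pert = cross_entropy(perturb_logits(logits, labels, delta), labels)
```

In the paper's learned variants, `delta` would be optimized rather than hand-set; flipping its sign on a class raises that class's loss instead, which is the negative-augmentation direction of the same mechanism.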