Paper Title
Learning Class Unique Features in Fine-Grained Visual Classification
Paper Authors
Paper Abstract
A major challenge in Fine-Grained Visual Classification (FGVC) is distinguishing categories with high inter-class similarity by learning features that differentiate subtle details. A conventional cross-entropy-trained Convolutional Neural Network (CNN) falls short of this challenge, as it may suffer from producing inter-class invariant features in FGVC. In this work, we propose a novel approach that regularizes the training of a CNN by enforcing the uniqueness of the features of each category from an information-theoretic perspective. To achieve this goal, we formulate a minimax loss based on a game-theoretic framework, in which a Nash equilibrium is proved to be consistent with this regularization objective. In addition, to prevent feasible solutions of the minimax loss that may produce redundant features, we present a Feature Redundancy Loss (FRL), based on the normalized inner product between each pair of selected feature maps, to complement the proposed minimax loss. Superior experimental results on several influential benchmarks, along with visualizations, show that our method fully exploits the performance of the baseline model without additional computation and achieves results comparable to state-of-the-art models.
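The abstract does not give an implementation of the Feature Redundancy Loss, so the following is a minimal sketch only, assuming a PyTorch setting in which the FRL penalizes the normalized inner product (cosine similarity) between flattened feature maps of selected channel pairs; the pair-selection rule, the use of the absolute value, and the averaging over pairs are assumptions not specified above.

```python
import torch
import torch.nn.functional as F

def feature_redundancy_loss(feature_maps: torch.Tensor,
                            pairs: list[tuple[int, int]]) -> torch.Tensor:
    """Sketch of a Feature Redundancy Loss (FRL) penalty.

    feature_maps: CNN activations of shape (B, C, H, W).
    pairs: indices of selected feature-map (channel) pairs; the selection
           strategy is an assumption, not specified in the abstract.
    Penalizes the normalized inner product (cosine similarity) between the
    flattened feature maps of each selected pair, discouraging redundant,
    highly correlated features.
    """
    b, c, h, w = feature_maps.shape
    flat = feature_maps.view(b, c, h * w)  # flatten spatial dimensions
    loss = feature_maps.new_zeros(())
    for i, j in pairs:
        # Per-sample cosine similarity between channels i and j.
        sim = F.cosine_similarity(flat[:, i], flat[:, j], dim=-1)
        loss = loss + sim.abs().mean()
    return loss / max(len(pairs), 1)
```

In such a setup, this term would typically be added, with a weighting coefficient, to the cross-entropy and minimax objectives during training; the weighting and the way it interacts with the minimax loss are likewise left unspecified by the abstract.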