Paper Title

Improving Fine-Grained Visual Recognition in Low Data Regimes via Self-Boosting Attention Mechanism

Paper Authors

Yangyang Shu, Baosheng Yu, Haiming Xu, Lingqiao Liu

Paper Abstract

The challenge of fine-grained visual recognition often lies in discovering the key discriminative regions. While such regions can be automatically identified from a large-scale labeled dataset, a similar approach might become less effective when only a few annotations are available. In low data regimes, a network often struggles to choose the correct regions for recognition and tends to overfit to spuriously correlated patterns in the training data. To tackle this issue, this paper proposes the self-boosting attention mechanism, a novel method for regularizing the network to focus on the key regions shared across samples and classes. Specifically, the proposed method first generates an attention map for each training image, highlighting the discriminative parts for identifying the ground-truth object category. The generated attention maps are then used as pseudo-annotations, and the network is trained to fit them as an auxiliary task. We call this approach the self-boosting attention mechanism (SAM). We also develop a variant that uses SAM to create multiple attention maps for pooling the convolutional feature maps in a bilinear-pooling style, dubbed SAM-Bilinear. Through extensive experimental studies, we show that both methods significantly improve fine-grained visual recognition performance in low data regimes and can be incorporated into existing network architectures. The source code is publicly available at: https://github.com/GANPerf/SAM
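To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of the core SAM idea: derive a CAM-style attention map for the ground-truth class, detach it as a pseudo-annotation, and fit it with a lightweight head as an auxiliary regression task. Everything here is an assumption for illustration, not the authors' exact implementation (the class names, the CAM-style attention derivation, the MSE auxiliary loss, and the weight `lambda_att` are all hypothetical); the official code is in the linked repository.

```python
# Minimal SAM sketch (illustrative assumptions, not the official implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAMSketch(nn.Module):
    def __init__(self, backbone, num_classes, feat_dim=2048, lambda_att=0.5):
        super().__init__()
        self.backbone = backbone          # conv feature extractor -> (B, C, H, W)
        self.fc = nn.Linear(feat_dim, num_classes)
        self.att_head = nn.Conv2d(feat_dim, 1, kernel_size=1)  # predicts attention
        self.lambda_att = lambda_att      # hypothetical auxiliary-loss weight

    def forward(self, x, labels):
        feats = self.backbone(x)                      # (B, C, H, W)
        logits = self.fc(feats.mean(dim=(2, 3)))      # global average pooling
        cls_loss = F.cross_entropy(logits, labels)

        # CAM-style attention for the ground-truth class: weight the feature
        # channels by the classifier weights of the true label.
        w = self.fc.weight[labels]                    # (B, C)
        cam = torch.einsum('bc,bchw->bhw', w, feats)  # (B, H, W)
        cam = F.relu(cam)
        cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)

        # Use the detached map as a pseudo-annotation and regress the
        # attention head onto it as an auxiliary task.
        pred_att = torch.sigmoid(self.att_head(feats)).squeeze(1)  # (B, H, W)
        att_loss = F.mse_loss(pred_att, cam.detach())

        return cls_loss + self.lambda_att * att_loss, logits
```

The `detach()` on the pseudo-annotation keeps the auxiliary gradient from flowing back through the map that generated it, so the network is regularized toward its own most discriminative regions rather than chasing a moving target. For the SAM-Bilinear variant, the abstract describes pooling the convolutional maps with multiple attention maps in a bilinear-pooling style; a possible reading of that step is sketched below, with the number of maps `num_maps` and the sigmoid gating again being assumptions:

```python
# Sketch of a SAM-Bilinear-style pooling layer (assumed design, not official).
class SAMBilinearPoolSketch(nn.Module):
    def __init__(self, feat_dim=2048, num_maps=4):
        super().__init__()
        self.att_heads = nn.Conv2d(feat_dim, num_maps, kernel_size=1)

    def forward(self, feats):                            # feats: (B, C, H, W)
        atts = torch.sigmoid(self.att_heads(feats))      # (B, K, H, W)
        # Bilinear-style pooling: one pooled C-dim descriptor per attention map.
        pooled = torch.einsum('bkhw,bchw->bkc', atts, feats)
        pooled = pooled / (feats.shape[2] * feats.shape[3])
        return pooled.flatten(1)                         # (B, K*C) descriptor
```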
