Paper Title
L2G: A Simple Local-to-Global Knowledge Transfer Framework for Weakly Supervised Semantic Segmentation
Paper Authors
Paper Abstract
Mining precise class-aware attention maps, a.k.a. class activation maps, is essential for weakly supervised semantic segmentation. In this paper, we present L2G, a simple online local-to-global knowledge transfer framework for high-quality object attention mining. We observe that classification models can discover object regions in greater detail when the input image is replaced with its local patches. Motivated by this, we first leverage a local classification network to extract attention maps from multiple local patches randomly cropped from the input image. We then utilize a global network to learn complementary attention knowledge across these multiple local attention maps online. Our framework guides the global network to absorb, from a global view, the rich object-detail knowledge captured locally, thereby producing high-quality attention maps that can be directly used as pseudo annotations for semantic segmentation networks. Experiments show that our method attains mIoU scores of 72.1% and 44.2% on the validation sets of PASCAL VOC 2012 and MS COCO 2014, respectively, setting new state-of-the-art records. Code is available at https://github.com/PengtaoJiang/L2G.
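The core idea described in the abstract — merging attention maps mined from random local crops into a global pseudo-target that supervises a global network — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: `attn_fn` (the local classification network's attention extractor), the patch size, and the element-wise-max merge are simplifying assumptions here.

```python
import numpy as np

def local_to_global_target(image, attn_fn, num_patches=4, patch=3, rng=None):
    """Build a global attention pseudo-target from random local crops.

    Each crop is passed through a (hypothetical) local attention
    extractor `attn_fn`; the resulting local maps are pasted back at
    their crop locations and merged by element-wise maximum, yielding
    a target the global network could be trained to match.
    """
    rng = rng or np.random.default_rng(0)
    H, W = image.shape
    target = np.zeros((H, W))
    for _ in range(num_patches):
        # Sample a random top-left corner for the local crop.
        y = int(rng.integers(0, H - patch + 1))
        x = int(rng.integers(0, W - patch + 1))
        local_attn = attn_fn(image[y:y + patch, x:x + patch])
        # Merge: keep the strongest response seen at each location.
        target[y:y + patch, x:x + patch] = np.maximum(
            target[y:y + patch, x:x + patch], local_attn)
    return target
```

In the full method, a real local classification network would produce `local_attn`, and the global network would be trained online so its attention map matches this merged target.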