Title
Few-Shot Segmentation via Rich Prototype Generation and Recurrent Prediction Enhancement
Authors
Abstract
Prototype learning and decoder construction are the keys to few-shot segmentation. However, existing methods use only a single prototype generation mode, which cannot cope with the intractable problem of objects at various scales. Moreover, the one-way forward propagation adopted by previous methods may dilute the information carried by registered features during the decoding process. In this work, we propose a rich prototype generation module (RPGM) and a recurrent prediction enhancement module (RPEM), which respectively reinforce the prototype learning paradigm and build a unified memory-augmented decoder for few-shot segmentation. Specifically, the RPGM combines superpixel and K-means clustering to generate rich prototype features with complementary scale relationships, adapting to the scale gap between support and query images. The RPEM utilizes a recurrent mechanism to design a round-way propagation decoder, so that registered features can continuously provide object-aware information. Experiments show that our method consistently outperforms other competitors on two popular benchmarks, PASCAL-$5^{i}$ and COCO-$20^{i}$.
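The abstract gives no implementation details, but the contrast it draws between a single prototype and multiple clustered prototypes can be illustrated with a short sketch. The code below is an assumption-laden illustration rather than the authors' method: tensor shapes, the cluster count k, and the function names are ours. It compares masked average pooling, which yields one prototype per support image, with a plain K-means pass over foreground feature vectors, which yields several scale-aware prototypes.

```python
# Minimal sketch (not the authors' code) of single- vs. multi-prototype
# generation from a masked support feature map. Shapes and k are assumptions.
import torch

def single_prototype(feat, mask):
    """Masked average pooling: one prototype per support image.
    feat: (C, H, W) feature map, mask: (H, W) binary foreground mask."""
    fg = feat[:, mask.bool()]                      # (C, N) foreground features
    return fg.mean(dim=1)                          # (C,)

def kmeans_prototypes(feat, mask, k=5, iters=10):
    """K-means over foreground feature vectors yields k prototypes."""
    fg = feat[:, mask.bool()].t()                  # (N, C) foreground vectors
    # initialize centers from randomly chosen foreground vectors
    idx = torch.randperm(fg.size(0))[:k]
    centers = fg[idx].clone()                      # (k, C)
    for _ in range(iters):
        # assign each vector to its nearest center
        dist = torch.cdist(fg, centers)            # (N, k)
        assign = dist.argmin(dim=1)
        # update each center as the mean of its assigned vectors
        for j in range(k):
            members = fg[assign == j]
            if members.numel() > 0:
                centers[j] = members.mean(dim=0)
    return centers                                 # (k, C) rich prototypes

# Usage on dummy data: a 256-channel feature map with a random foreground mask.
feat = torch.randn(256, 60, 60)
mask = (torch.rand(60, 60) > 0.5).float()
print(single_prototype(feat, mask).shape)          # torch.Size([256])
print(kmeans_prototypes(feat, mask, k=5).shape)    # torch.Size([5, 256])
```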