Paper Title

Tree Energy Loss: Towards Sparsely Annotated Semantic Segmentation

Paper Authors

Zhiyuan Liang, Tiancai Wang, Xiangyu Zhang, Jian Sun, Jianbing Shen

Paper Abstract

Sparsely annotated semantic segmentation (SASS) aims to train a segmentation network with coarse-grained (i.e., point-, scribble-, and block-wise) supervision, where only a small proportion of pixels are labeled in each image. In this paper, we propose a novel tree energy loss for SASS by providing semantic guidance for unlabeled pixels. The tree energy loss represents images as minimum spanning trees to model both low-level and high-level pair-wise affinities. By sequentially applying these affinities to the network prediction, soft pseudo labels for unlabeled pixels are generated in a coarse-to-fine manner, achieving dynamic online self-training. The tree energy loss is effective and easy to incorporate into existing frameworks by combining it with a traditional segmentation loss. Compared with previous SASS methods, our method requires no multistage training strategies, alternating optimization procedures, additional supervised data, or time-consuming post-processing, while outperforming them in all SASS settings. Code is available at https://github.com/megvii-research/TreeEnergyLoss.
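To make the abstract's pipeline concrete, the sketch below builds a minimum spanning tree over a 4-connected pixel grid, converts tree (path) distances into pair-wise affinities, and uses the affinity-filtered predictions as soft pseudo labels for unlabeled pixels. This is a minimal NumPy/SciPy illustration under simplifying assumptions: it uses a single tree rather than the paper's sequential low-level and high-level trees, and the function names and the L1 pseudo-label distance are illustrative choices, not the authors' released implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra


def grid_edges(h, w):
    """Index pairs (i, j) for all 4-connected neighbors on an h*w grid."""
    idx = np.arange(h * w).reshape(h, w)
    right = np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)
    down = np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1)
    return np.concatenate([right, down], axis=0)


def tree_affinity(feat, h, w, sigma=0.5):
    """Row-normalized affinities exp(-d_tree / sigma) along the MST.

    feat: (h*w, F) per-pixel features (image colors for the low-level
    tree, network embeddings for the high-level tree). Edge costs are
    feature distances; the MST keeps only the lowest-cost connections.
    """
    edges = grid_edges(h, w)
    cost = np.linalg.norm(feat[edges[:, 0]] - feat[edges[:, 1]], axis=1) + 1e-8
    n = h * w
    graph = csr_matrix((cost, (edges[:, 0], edges[:, 1])), shape=(n, n))
    mst = minimum_spanning_tree(graph)
    dist = dijkstra(mst, directed=False)          # pairwise tree distances
    aff = np.exp(-dist / sigma)
    return aff / aff.sum(axis=1, keepdims=True)


def tree_energy_loss(pred, feat, h, w, unlabeled_mask):
    """L1 gap between predictions and tree-filtered soft pseudo labels,
    averaged over the unlabeled pixels only."""
    pseudo = tree_affinity(feat, h, w) @ pred     # (h*w, C) soft pseudo labels
    return np.abs(pred - pseudo)[unlabeled_mask].mean()
```

In training, this term would be added to a standard cross-entropy loss computed on the sparsely labeled pixels (weighted by a balancing factor), so the labeled pixels supply direct supervision while the tree term propagates it to unlabeled ones.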
