Paper Title

Semantic Segmentation with Active Semi-Supervised Learning

Paper Authors

Aneesh Rangnekar, Christopher Kanan, Matthew Hoffman

Paper Abstract

Using deep learning, we now have the ability to create exceptionally good semantic segmentation systems; however, collecting the prerequisite pixel-wise annotations for training images remains expensive and time-consuming. Therefore, it would be ideal to minimize the number of human annotations needed when creating a new dataset. Here, we address this problem by proposing a novel algorithm that combines active learning and semi-supervised learning. Active learning is an approach for identifying the best unlabeled samples to annotate. While there has been work on active learning for segmentation, most methods require annotating all pixel objects in each image, rather than only the most informative regions. We argue that this is inefficient. Instead, our active learning approach aims to minimize the number of annotations per-image. Our method is enriched with semi-supervised learning, where we use pseudo labels generated with a teacher-student framework to identify image regions that help disambiguate confused classes. We also integrate mechanisms that enable better performance on imbalanced label distributions, which have not been studied previously for active learning in semantic segmentation. In experiments on the CamVid and CityScapes datasets, our method obtains over 95% of the network's performance on the full-training set using less than 17% of the training data, whereas the previous state of the art required 40% of the training data.
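The abstract describes two key ingredients: pseudo labels produced by a teacher-student framework, and an active-learning criterion that targets image regions where the model confuses classes. The sketch below is a minimal, hypothetical illustration of those ideas in PyTorch, not the authors' actual implementation; the margin-based region score, the grid-based region layout, and the `region_size` and `conf_threshold` parameters are assumptions made here for clarity.

```python
# Minimal sketch (not the paper's S4AL implementation) of:
# (1) pseudo labels from a teacher network's predictions, and
# (2) scoring image regions by how "confused" the prediction is,
#     so that only the most informative regions are sent for annotation.
# region_size, conf_threshold, and the top-2 margin score are illustrative assumptions.
import torch
import torch.nn.functional as F


def pseudo_labels_and_region_scores(teacher_logits, region_size=64, conf_threshold=0.9):
    """teacher_logits: [B, C, H, W] raw outputs of the teacher network.

    Returns:
      pseudo: [B, H, W] argmax labels, with low-confidence pixels set to -1 (ignore index).
      scores: [B, H // region_size, W // region_size] per-region acquisition scores;
              higher score = smaller top-2 margin = more confused = better annotation candidate.
    """
    probs = teacher_logits.softmax(dim=1)                        # [B, C, H, W]
    top2 = probs.topk(2, dim=1).values                           # [B, 2, H, W]
    confidence, pseudo = probs.max(dim=1)                        # [B, H, W] each

    # Keep only confident pixels as pseudo labels; mark the rest as ignore (-1).
    pseudo = torch.where(confidence >= conf_threshold, pseudo, torch.full_like(pseudo, -1))

    # Margin between the two most likely classes; small margin means the teacher
    # confuses two classes at that pixel. Average over non-overlapping regions and
    # negate so that confused regions receive the highest score.
    margin = top2[:, 0] - top2[:, 1]                             # [B, H, W]
    scores = -F.avg_pool2d(margin.unsqueeze(1), kernel_size=region_size).squeeze(1)
    return pseudo, scores


if __name__ == "__main__":
    B, C, H, W = 2, 11, 256, 256                                 # e.g. CamVid has 11 classes
    logits = torch.randn(B, C, H, W)
    pseudo, scores = pseudo_labels_and_region_scores(logits)

    # Select the top-k regions across the batch to request human labels for.
    k = 5
    topk_idx = scores.flatten().topk(k).indices
    print("pseudo label shape:", tuple(pseudo.shape))
    print("selected region indices (flattened):", topk_idx.tolist())
```

In a full pipeline, the selected regions would be annotated by a human, the student would be trained on the union of labeled regions and confident pseudo labels, and the teacher would typically track the student (for example via an exponential moving average) before the next acquisition round.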
