Paper Title

Can I see an Example? Active Learning the Long Tail of Attributes and Relations

Authors

Tyler L. Hayes, Maximilian Nickel, Christopher Kanan, Ludovic Denoyer, Arthur Szlam

Abstract

There has been significant progress in creating machine learning models that identify objects in scenes along with their associated attributes and relationships; however, there is a large gap between the best models and human capabilities. One of the major reasons for this gap is the difficulty in collecting sufficient amounts of annotated relations and attributes for training these systems. While some attributes and relations are abundant, the distribution in the natural world and existing datasets is long tailed. In this paper, we address this problem by introducing a novel incremental active learning framework that asks for attributes and relations in visual scenes. While conventional active learning methods ask for labels of specific examples, we flip this framing to allow agents to ask for examples from specific categories. Using this framing, we introduce an active sampling method that asks for examples from the tail of the data distribution and show that it outperforms classical active learning methods on Visual Genome.
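To make the flipped framing concrete, below is a minimal, hypothetical sketch of how an agent might request new examples by category, weighting requests toward the tail of the label distribution rather than asking for labels on specific instances. The function name, the alpha parameter, and the toy relation counts are illustrative assumptions, not the paper's actual sampling method or data.

```python
import random
from collections import Counter

def tail_weighted_category_request(label_counts, num_requests, alpha=1.0, rng=None):
    """Hypothetical sketch: choose categories to request examples for,
    favoring rare (tail) attribute/relation categories.

    label_counts: dict mapping category name -> number of annotated examples so far.
    num_requests: how many category requests to issue this round.
    alpha: exponent controlling how strongly the tail is favored (higher = more tail-heavy).
    """
    rng = rng or random.Random(0)
    # Weight each category by the inverse of its current count, so under-represented
    # categories are requested more often than abundant head categories.
    weights = {c: 1.0 / (n + 1) ** alpha for c, n in label_counts.items()}
    categories = list(weights)
    return rng.choices(categories, weights=[weights[c] for c in categories], k=num_requests)

# Toy usage: "on" and "wearing" stand in for head relations; "riding" and "carrying" for the tail.
counts = Counter({"on": 5000, "wearing": 1200, "riding": 40, "carrying": 15})
print(tail_weighted_category_request(counts, num_requests=5))
```

In this sketch, the agent's output is a list of category names to ask an annotator (or data source) for, which mirrors the abstract's idea of requesting examples from specific, under-represented categories instead of requesting labels for specific examples.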
