Paper Title
Adversarial Focal Loss: Asking Your Discriminator for Hard Examples
Paper Authors
Paper Abstract
Focal Loss has reached incredible popularity because it uses a simple technique to identify and exploit hard examples, achieving better performance on classification. However, this method does not easily generalize beyond classification tasks, such as to keypoint detection. In this paper, we propose a novel adaptation of Focal Loss for keypoint detection tasks, called Adversarial Focal Loss (AFL). AFL is not only semantically analogous to Focal Loss, but also works as a plug-and-play upgrade for arbitrary loss functions. While Focal Loss requires output from a classifier, AFL leverages a separate adversarial network to produce a difficulty score for each input. This difficulty score can then be used to dynamically prioritize learning on hard examples, even in the absence of a classifier. In this work, we show AFL's effectiveness in enhancing existing methods in keypoint detection and verify its capability to re-weight examples based on difficulty.
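To make the mechanism described above concrete, here is a minimal sketch of how discriminator-produced difficulty scores could re-weight an arbitrary per-example loss. This is an illustrative assumption of the general idea, not the paper's exact formulation: the wrapper class `DifficultyWeightedLoss`, the softmax-based weighting, and all tensor shapes are hypothetical choices made for this example.

```python
import torch
import torch.nn as nn


class DifficultyWeightedLoss(nn.Module):
    """Illustrative AFL-style wrapper: re-weights an arbitrary per-example
    loss using difficulty scores produced by a separate adversarial network
    (discriminator). The softmax weighting below is an assumption for
    illustration only, not the formulation proposed in the paper."""

    def __init__(self, base_loss):
        super().__init__()
        # base_loss must return per-element losses (reduction='none')
        self.base_loss = base_loss

    def forward(self, predictions, targets, difficulty_scores):
        # Per-example loss from the wrapped objective, e.g. heatmap MSE.
        per_example = self.base_loss(predictions, targets)
        if per_example.dim() > 1:
            per_example = per_example.flatten(1).mean(dim=1)
        # Turn discriminator scores into normalized weights so harder
        # examples (higher scores) contribute more to the batch loss.
        # Scores are detached: no gradient flows to the discriminator here.
        weights = torch.softmax(difficulty_scores.detach(), dim=0)
        return (weights * per_example).sum()


# Hypothetical usage: wrap a standard keypoint heatmap regression loss.
criterion = DifficultyWeightedLoss(nn.MSELoss(reduction="none"))
preds = torch.randn(4, 17, 64, 64, requires_grad=True)  # predicted heatmaps
gts = torch.randn(4, 17, 64, 64)                        # ground-truth heatmaps
scores = torch.rand(4)                        # discriminator difficulty scores
loss = criterion(preds, gts, scores)
loss.backward()
```

Because the wrapper only consumes a per-example loss and a score vector, any existing loss function can be substituted for `nn.MSELoss`, which is what makes this kind of re-weighting usable without a classifier.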