Paper Title

Meta-Learning for One-Class Classification with Few Examples using Order-Equivariant Network

Paper Authors

Ademola Oladosu, Tony Xu, Philip Ekfeldt, Brian A. Kelly, Miles Cranmer, Shirley Ho, Adrian M. Price-Whelan, Gabriella Contardo

Paper Abstract

This paper presents a meta-learning framework for few-shot One-Class Classification (OCC) at test time, a setting where labeled examples are available only for the positive class and no supervision is given for the negative class. We consider a set of 'one-class classification' target tasks with only a small set of positive examples available for each task, together with a set of training tasks with full supervision (i.e. highly imbalanced classification). We propose an approach using an order-equivariant network to learn a 'meta' binary classifier. The model takes as input an example to classify from a given task, along with the corresponding supervised set of positive examples for that OCC task. The output of the model is thus 'conditioned' on the available positive examples of a given task, allowing it to predict on new tasks and new examples without any labeled negative examples. In this paper, we are motivated by an astronomy application. Our goal is to identify whether stars belong to a specific stellar group (the 'one-class' for a given task), called a stellar stream, where each stellar stream is a different OCC task. We show that our method transfers well to unseen (test) synthetic streams and outperforms the baselines, even though it is not retrained and accesses a much smaller part of the data per task at prediction time (positive supervision only). It does not, however, transfer as well to the real stream GD-1. This could stem from intrinsic differences between the synthetic and real streams, highlighting the need for consistency in the 'nature' of the tasks for this method. Nevertheless, light fine-tuning improves performance and outperforms our baselines. Our experiments show encouraging results that motivate further exploration of meta-learning methods for OCC tasks.
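The abstract describes a binary classifier whose prediction for a query example is conditioned on a set of positive examples, with the conditioning computed by a set-based (order-equivariant) network. As a rough illustration of that idea only, below is a minimal PyTorch sketch using a Deep-Sets-style permutation-invariant encoder over the positive support set. The paper's actual order-equivariant architecture, feature dimensions, and all names here (MetaOCCClassifier, phi, rho) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MetaOCCClassifier(nn.Module):
    """Hypothetical sketch: a 'meta' binary classifier conditioned on a set
    of positive examples via permutation-invariant (Deep-Sets-style) pooling.
    The paper's exact order-equivariant architecture may differ."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        # Encodes each positive support example independently.
        self.phi = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Classifies a query example given the pooled support embedding.
        self.rho = nn.Sequential(
            nn.Linear(hidden + in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, query: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
        # support: (n_support, in_dim) -- positive examples of one OCC task.
        # Mean pooling makes the conditioning invariant to support ordering.
        task_embedding = self.phi(support).mean(dim=0)
        # query: (batch, in_dim) -- examples to classify for the same task.
        cond = task_embedding.expand(query.shape[0], -1)
        logits = self.rho(torch.cat([query, cond], dim=-1))
        return logits.squeeze(-1)  # one positive-class score per query example

# Usage sketch: score candidate stars against known stream members.
model = MetaOCCClassifier(in_dim=5)   # e.g. sky position + proper motions (assumed)
support = torch.randn(20, 5)          # 20 confirmed positives for this task
query = torch.randn(128, 5)           # candidate stars to classify
scores = model(query, support)        # higher => more likely a stream member
```

Pooling over the support set is what allows the same trained model to be reused on a new OCC task without retraining: only the positive support set changes, and the classifier's output is re-conditioned accordingly.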
