Paper Title
One-vs-Rest Network-based Deep Probability Model for Open Set Recognition
Paper Authors
Paper Abstract
Unknown examples that are unseen during training often appear in real-world computer vision tasks, and an intelligent self-learning system should be able to differentiate between known and unknown examples. Open set recognition, which addresses this problem, has been studied for approximately a decade. However, conventional open set recognition methods based on deep neural networks (DNNs) lack a foundation for post-recognition score analysis. In this paper, we propose a DNN structure in which multiple one-vs-rest sigmoid networks follow a convolutional neural network feature extractor. A one-vs-rest network, which is composed of rectified linear unit activation functions for the hidden layers and a single sigmoid target class output node, maximizes the ability to learn information from nonmatch examples. Furthermore, the network yields a sophisticated nonlinear features-to-output mapping that is explainable in the feature space. By introducing extreme value theory-based calibration techniques, this nonlinear and explainable mapping provides a well-grounded class membership probability model. Our experiments show that one-vs-rest networks can provide more informative hidden representations for unknown examples than the commonly used softmax layer. In addition, the proposed probability model outperforms state-of-the-art methods in open set classification scenarios.
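The abstract describes a shared convolutional feature extractor followed by one one-vs-rest head per known class, each head built from ReLU hidden layers and a single sigmoid target-class output node. The following is a minimal PyTorch sketch of that structure; the layer sizes, the small extractor, and the simple rejection threshold at the end are illustrative assumptions, not the authors' exact configuration (in particular, the paper calibrates the sigmoid scores with extreme value theory rather than thresholding them directly).

```python
# Illustrative sketch only: a CNN feature extractor followed by K one-vs-rest heads.
import torch
import torch.nn as nn


class OneVsRestHead(nn.Module):
    """One one-vs-rest network: ReLU hidden layers and a single sigmoid output node."""

    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single target-class output node
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid "match" score for this head's target class.
        return torch.sigmoid(self.net(x)).squeeze(-1)


class OneVsRestOpenSetModel(nn.Module):
    """Shared CNN feature extractor followed by one one-vs-rest head per known class."""

    def __init__(self, num_known_classes: int, feat_dim: int = 256):
        super().__init__()
        # Small placeholder extractor; any CNN backbone could be used here.
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [OneVsRestHead(feat_dim) for _ in range(num_known_classes)]
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.extractor(images)
        # Per-class sigmoid scores, shape (batch, num_known_classes).
        return torch.stack([head(feats) for head in self.heads], dim=1)


if __name__ == "__main__":
    model = OneVsRestOpenSetModel(num_known_classes=10)
    scores = model(torch.randn(4, 3, 32, 32))
    # Naive open-set rule for illustration: reject as "unknown" when every
    # per-class score is below a fixed threshold. The paper instead applies
    # extreme value theory-based calibration before making this decision.
    is_unknown = (scores < 0.5).all(dim=1)
    print(scores.shape, is_unknown)
```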