Paper Title

$σ^2$R Loss: a Weighted Loss by Multiplicative Factors using Sigmoidal Functions

Paper Authors

Riccardo La Grassa, Ignazio Gallo, Nicola Landro

Paper Abstract

In neural networks, the loss function is the core of the learning process that leads the optimizer to an approximation of the optimal convergence error. Convolutional neural networks (CNNs) use the loss function as a supervisory signal to train a deep model, and it contributes significantly to achieving the state of the art in several fields of artificial vision. Cross-entropy and center loss functions are commonly used to increase the discriminating power of learned features and to improve the generalization performance of the model. Center loss minimizes the intra-class variance and, at the same time, penalizes large distances between the deep features within each class. However, the total error of the center loss is heavily influenced by the majority of the instances, which can lead to a frozen state of the intra-class variance. To address this, we introduce a new loss function called sigma squared reduction loss ($σ^2$R loss), which is regulated by a sigmoid function that inflates or deflates the error per instance so that the intra-class variance continues to decrease. Our loss has a clear intuition and geometric interpretation. Furthermore, we demonstrate the effectiveness of our proposal experimentally on several benchmark datasets, showing the reduction of the intra-class variance and surpassing the results obtained with the center loss and soft nearest neighbour loss functions.
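The abstract describes the $σ^2$R loss only at a high level: a center-loss-style term whose per-instance error is rescaled by a sigmoidal multiplicative factor. Below is a minimal sketch of that idea in PyTorch, assuming a standard center-loss setup with learnable class centers; the class name, the `steepness` hyper-parameter, and the choice to center the sigmoid at the batch-mean error are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class SigmaSquaredRLoss(nn.Module):
    """Hypothetical sketch of a sigmoid-weighted, center-loss-style term.

    Each instance's squared distance to its class center is rescaled by a
    sigmoidal multiplicative factor, so above-average errors are inflated
    and below-average ones deflated instead of all instances contributing
    uniformly. Illustrative only; the paper's formulation may differ.
    """

    def __init__(self, num_classes: int, feat_dim: int, steepness: float = 1.0):
        super().__init__()
        # Learnable class centers, as in the standard center loss.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Assumed hyper-parameter controlling the slope of the sigmoid.
        self.steepness = steepness

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance of each feature to its class center.
        diffs = features - self.centers[labels]   # (batch, feat_dim)
        sq_dist = (diffs ** 2).sum(dim=1)         # (batch,)
        # Sigmoidal multiplicative factor, centered (by assumption) at the
        # batch-mean error: larger-than-average errors get weight > 0.5,
        # smaller-than-average errors get weight < 0.5.
        weight = torch.sigmoid(self.steepness * (sq_dist - sq_dist.mean().detach()))
        return (weight * sq_dist).mean()
```

In a typical joint-supervision setup, as with the center loss, this term would be added to cross-entropy with a balancing weight, e.g. `loss = ce + lam * sigma2r(features, labels)`, where `lam` is a hypothetical trade-off hyper-parameter.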
