Paper Title
Improving model calibration with accuracy versus uncertainty optimization
Paper Authors
Paper Abstract
Obtaining reliable and accurate quantification of uncertainty estimates from deep neural networks is important in safety-critical applications. A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate. Uncertainty calibration is a challenging problem as there is no ground truth available for uncertainty estimates. We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration. We introduce a differentiable accuracy versus uncertainty calibration (AvUC) loss function that allows a model to learn to provide well-calibrated uncertainties, in addition to improved accuracy. We also demonstrate the same methodology can be extended to post-hoc uncertainty calibration on pretrained models. We illustrate our approach with mean-field stochastic variational inference and compare with state-of-the-art methods. Extensive experiments demonstrate our approach yields better model calibration than existing methods on large-scale image classification tasks under distributional shift.
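To make the idea concrete, below is a minimal sketch (in PyTorch, though the method is not tied to any framework) of an AvU-style calibration loss. It assumes predictive entropy as the uncertainty measure, a fixed uncertainty threshold, and tanh-scaled soft counts for the four accuracy/uncertainty outcomes; the function name avu_loss_sketch and the hyperparameter choices are illustrative and should not be read as the authors' exact implementation.

import torch
import torch.nn.functional as F

def avu_loss_sketch(logits, labels, uncertainty_threshold=0.5, eps=1e-10):
    # Illustrative sketch of an accuracy-versus-uncertainty (AvU) style loss.
    # Predictive entropy is used as the uncertainty measure; the threshold
    # and the tanh-based soft counts are assumptions for this sketch.
    probs = F.softmax(logits, dim=-1)
    confidences, predictions = probs.max(dim=-1)

    # Predictive entropy, normalized to [0, 1] by log(num_classes).
    entropy = -(probs * torch.log(probs + eps)).sum(dim=-1)
    uncertainty = entropy / torch.log(torch.tensor(float(logits.shape[-1])))

    accurate = predictions.eq(labels).float()
    certain = (uncertainty < uncertainty_threshold).float()

    # Differentiable proxies for the four outcome counts:
    # accurate-certain (n_ac), accurate-uncertain (n_au),
    # inaccurate-certain (n_ic), inaccurate-uncertain (n_iu).
    n_ac = (accurate * certain * confidences * (1.0 - torch.tanh(uncertainty))).sum()
    n_au = (accurate * (1.0 - certain) * confidences * torch.tanh(uncertainty)).sum()
    n_ic = ((1.0 - accurate) * certain * (1.0 - confidences) * (1.0 - torch.tanh(uncertainty))).sum()
    n_iu = ((1.0 - accurate) * (1.0 - certain) * (1.0 - confidences) * torch.tanh(uncertainty)).sum()

    # The loss shrinks as accurate-and-certain plus inaccurate-and-uncertain
    # predictions dominate the batch (i.e., as AvU approaches 1).
    return torch.log(1.0 + (n_au + n_ic) / (n_ac + n_iu + eps))

In practice a term of this form would be added to the primary training objective (for example, the variational free energy in mean-field stochastic variational inference) with a weighting coefficient, so the model is optimized jointly for accuracy and well-calibrated uncertainty.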