Paper Title
Depth Uncertainty in Neural Networks
Paper Authors
Abstract
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass. We validate our approach on real-world regression and image classification tasks. Our approach provides uncertainty calibration, robustness to dataset shift, and accuracies competitive with more computationally expensive baselines.
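Below is a minimal sketch (not the authors' implementation) of the idea described in the abstract: treating network depth as a random variable, attaching an output head to each hidden layer, and marginalising the per-depth predictions under a categorical distribution over depth, all within a single forward pass. The class and parameter names (`DepthUncertaintyMLP`, `depth_logits`, `max_depth`) are hypothetical, and the depth distribution is shown as a simple learnable categorical rather than the paper's exact training objective.

```python
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


class DepthUncertaintyMLP:
    """Hypothetical sketch: a feed-forward classifier whose depth is treated
    as a random variable. Depth d corresponds to the subnetwork formed by the
    first d shared hidden layers plus an output head; predictions from all
    depths are computed in one pass and combined by marginalisation."""

    def __init__(self, in_dim, hidden_dim, out_dim, max_depth, seed=0):
        rng = np.random.default_rng(seed)
        self.layers, d = [], in_dim
        for _ in range(max_depth):
            self.layers.append(rng.normal(0, 1 / np.sqrt(d), (d, hidden_dim)))
            d = hidden_dim
        # one output head per candidate depth; hidden weights are shared
        self.heads = [rng.normal(0, 1 / np.sqrt(hidden_dim), (hidden_dim, out_dim))
                      for _ in range(max_depth)]
        # unnormalised log-probabilities of the categorical depth distribution
        self.depth_logits = np.zeros(max_depth)

    def forward(self, x):
        """Per-depth class probabilities and the depth distribution,
        obtained with a single pass through the shared layers."""
        per_depth, h = [], x
        for W, head in zip(self.layers, self.heads):
            h = np.maximum(h @ W, 0.0)           # shared hidden layer (ReLU)
            per_depth.append(softmax(h @ head))  # prediction of the subnetwork ending here
        return np.stack(per_depth), softmax(self.depth_logits)

    def predict(self, x):
        """Marginal predictive distribution: sum_d p(y | x, d) q(d)."""
        per_depth, q = self.forward(x)           # (D, N, C) and (D,)
        return np.einsum("d...c,d->...c", per_depth, q)
```

In this sketch the disagreement between the per-depth heads is what yields model uncertainty: inputs on which shallow and deep subnetworks disagree receive a more diffuse marginal prediction, while the single shared pass keeps the cost close to that of one deterministic forward pass.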