Paper Title

Introspective Learning: A Two-Stage Approach for Inference in Neural Networks

Paper Authors

Mohit Prabhushankar, Ghassan AlRegib

Paper Abstract

In this paper, we advocate for two stages in a neural network's decision making process. The first is the existing feed-forward inference framework where patterns in given data are sensed and associated with previously learned patterns. The second stage is a slower reflection stage where we ask the network to reflect on its feed-forward decision by considering and evaluating all available choices. Together, we term the two stages as introspective learning. We use gradients of trained neural networks as a measurement of this reflection. A simple three-layered Multi Layer Perceptron is used as the second stage that predicts based on all extracted gradient features. We perceptually visualize the post-hoc explanations from both stages to provide a visual grounding to introspection. For the application of recognition, we show that an introspective network is 4% more robust and 42% less prone to calibration errors when generalizing to noisy data. We also illustrate the value of introspective networks in downstream tasks that require generalizability and calibration including active learning, out-of-distribution detection, and uncertainty estimation. Finally, we ground the proposed machine introspection to human introspection for the application of image quality assessment.
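The abstract describes a concrete pipeline: for each available class choice, backpropagate a loss against that candidate label through the trained network, collect the resulting gradients as "reflection" features, and feed them to a small three-layer MLP that makes the final prediction. The following is a minimal NumPy sketch of that idea under loud assumptions: the first-stage "trained network" is reduced to a single linear softmax classifier so its gradients have a closed form, and all dimensions and the randomly initialized second-stage MLP are purely illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-stage model: a linear softmax classifier standing in
# for a trained network (4 input features, 3 classes). Illustrative only.
W = rng.normal(size=(4, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_feature(x, candidate, W):
    """Gradient of the cross-entropy loss w.r.t. W when the network is
    asked to 'reflect' on candidate class `candidate` (one-hot target).
    For softmax + cross-entropy, dL/dW = outer(x, p - t) in closed form."""
    p = softmax(W.T @ x)                 # feed-forward (stage 1) prediction
    t = np.eye(W.shape[1])[candidate]    # one-hot candidate label
    return np.outer(x, p - t).ravel()

x = rng.normal(size=4)

# Reflection (stage 2 input): gradient features for every available choice.
feats = np.concatenate([gradient_feature(x, c, W) for c in range(3)])

# Second stage: a simple three-layer MLP over the gradient features.
# Weights are random here; in the paper this MLP is trained.
def relu(z):
    return np.maximum(z, 0)

W1 = rng.normal(size=(feats.size, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 3))
logits = relu(relu(feats @ W1) @ W2) @ W3  # introspective prediction logits
```

With 3 candidate classes and a 4x3 weight matrix, `feats` has 36 entries and `logits` has one score per class; the point of the sketch is only the data flow (loss-per-candidate gradients in, class scores out).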
