Paper Title
Refining neural network predictions using background knowledge
Paper Authors
Paper Abstract
Recent work has shown that logical background knowledge can be used in learning systems to compensate for a lack of labeled training data. Many methods work by creating a loss function that encodes this knowledge. However, the logic is often discarded after training, even though it remains useful at test time. Instead, we ensure that neural network predictions satisfy the knowledge by refining them with an extra computation step. We introduce differentiable refinement functions that find a corrected prediction close to the original one. We study how to compute these refinement functions effectively and efficiently. Using a new algorithm called Iterative Local Refinement (ILR), we combine refinement functions to find refined predictions for logical formulas of any complexity. ILR finds refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot. Finally, ILR produces competitive results on the MNIST addition task.
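The abstract does not spell out the form of the refinement functions or the ILR algorithm, so the sketch below is only a rough, non-authoritative illustration of the general idea: given truth values in [0, 1] and constraints of the form a → b (taken here to be satisfied when the truth value of b is at least that of a, as under Gödel semantics), each local refinement replaces the current prediction with the closest one satisfying a single constraint, and the outer loop combines these local refinements until the whole formula holds. All names, the choice of fuzzy semantics, the L2 projection, and the toy values are assumptions for illustration, not the paper's implementation.

```python
def refine_implication(t, i, j):
    """Move (t[i], t[j]) to the closest point (in L2) with t[j] >= t[i].

    If the implication t[i] -> t[j] is violated, both values are set to
    their mean, which is the nearest point on the boundary t[i] = t[j].
    """
    if t[j] < t[i]:
        m = (t[i] + t[j]) / 2.0
        t[i] = t[j] = m
    return t


def iterative_local_refinement(t, implications, max_iters=100, tol=1e-9):
    """Repeatedly apply local refinements until every implication holds.

    This mirrors the spirit of the ILR algorithm named in the abstract
    (combining refinement functions for subformulas to refine a complex
    formula); the scheduling and stopping rule here are our assumptions.
    """
    for _ in range(max_iters):
        changed = False
        for i, j in implications:
            if t[j] < t[i] - tol:
                refine_implication(t, i, j)
                changed = True
        if not changed:
            break
    return t


# Toy example: network predictions for three atoms a, b, c with background
# knowledge a -> b and b -> c. The initial prediction violates both rules;
# refinement produces nearby truth values that satisfy them.
preds = [0.9, 0.4, 0.3]
print(iterative_local_refinement(preds, [(0, 1), (1, 2)]))
```

In this toy run the values converge toward a common truth value near 0.53, the closest point satisfying both implications. Because each local refinement projects onto a convex constraint set, cycling through the constraints converges; the paper's actual refinement functions for arbitrary fuzzy connectives may behave differently.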