Paper Title
Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference
Paper Authors
Paper Abstract
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient, in which the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations as well as leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models to avoid input entangling, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability, compared to previous models on the existing datasets.
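The abstract describes sampling and rewarding symbolic reasoning paths through policy gradient. The following is a minimal, hedged sketch of what a REINFORCE-style update over sampled reasoning paths could look like; it is not the authors' implementation, and the names `policy`, `sample_path`, and `reward_fn`, as well as the binary reward, are illustrative assumptions.

```python
import torch

def policy_gradient_step(policy, premise, hypothesis, reward_fn, optimizer, n_samples=8):
    """Sketch of a REINFORCE update over sampled symbolic reasoning paths.

    `policy`, `sample_path`, and `reward_fn` are hypothetical placeholders,
    not names from the paper.
    """
    losses = []
    for _ in range(n_samples):
        # A path is a sequence of natural-logic actions with their log-probabilities.
        actions, log_probs = policy.sample_path(premise, hypothesis)
        # Assumed binary reward: 1 if the path derives the correct label, else 0.
        reward = reward_fn(actions)
        # REINFORCE: maximize expected reward, i.e. minimize -reward * log p(path).
        losses.append(-reward * torch.stack(log_probs).sum())
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```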