Paper Title
Learning by Fixing: Solving Math Word Problems with Weak Supervision
Paper Authors
Paper Abstract
Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions. In this paper, we address this issue by introducing a \textit{weakly-supervised} paradigm for learning MWPs. Our method only requires the annotations of the final answers and can generate various solutions for a single problem. To boost weakly-supervised learning, we propose a novel \textit{learning-by-fixing} (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning. Specifically, for an incorrect solution tree generated by the neural network, the \textit{fixing} mechanism propagates the error from the root node to the leaf nodes and infers the most probable fix that can be executed to get the desired answer. To generate more diverse solutions, \textit{tree regularization} is applied to guide the efficient shrinkage and exploration of the solution space, and a \textit{memory buffer} is designed to track and save the various fixes discovered for each problem. Experimental results on the Math23K dataset show that the proposed LBF framework significantly outperforms reinforcement learning baselines in weakly-supervised learning. Furthermore, it achieves top-1 answer accuracy comparable to fully-supervised methods and much better top-3/5 accuracies, demonstrating its strength in producing diverse solutions.
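
As a rough, non-authoritative illustration of the fixing mechanism described in the abstract, the Python sketch below propagates a target answer from the root of a predicted expression tree down to its leaves and enumerates the single-leaf changes that make the tree execute to that answer. The names Node, evaluate, and one_step_fixes are illustrative assumptions rather than the paper's implementation; the actual LBF framework additionally ranks candidate fixes by the model's own probabilities, applies tree regularization, and stores discovered fixes in a memory buffer, all of which are omitted here:

import copy
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    op: Optional[str] = None        # '+', '-', '*', '/' for internal nodes; None for leaves
    value: Optional[float] = None   # quantity stored at a leaf
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def evaluate(node: Node) -> float:
    # Execute the solution tree bottom-up to obtain its numeric answer.
    if node.op is None:
        return node.value
    l, r = evaluate(node.left), evaluate(node.right)
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    return ops[node.op](l, r)

def one_step_fixes(node: Node, target: float) -> List[Node]:
    # Top-down error propagation: return every tree obtained by changing
    # exactly one leaf so that the whole tree evaluates to `target`.
    if node.op is None:
        return [Node(value=target)]
    l_val, r_val = evaluate(node.left), evaluate(node.right)
    fixes: List[Node] = []
    # Push the target into the left subtree, holding the right subtree fixed.
    left_target = {"+": target - r_val,
                   "-": target + r_val,
                   "*": target / r_val if r_val != 0 else None,
                   "/": target * r_val}[node.op]
    if left_target is not None:
        fixes += [Node(node.op, None, fl, copy.deepcopy(node.right))
                  for fl in one_step_fixes(node.left, left_target)]
    # Push the target into the right subtree, holding the left subtree fixed.
    right_target = {"+": target - l_val,
                    "-": l_val - target,
                    "*": target / l_val if l_val != 0 else None,
                    "/": l_val / target if target != 0 else None}[node.op]
    if right_target is not None:
        fixes += [Node(node.op, None, copy.deepcopy(node.left), fr)
                  for fr in one_step_fixes(node.right, right_target)]
    return fixes

# Toy usage: the solver predicts 3 + 4 * 2 (= 11) but the annotated answer is 14;
# every one-leaf fix found below (3 -> 6, 4 -> 5.5, or 2 -> 2.75) executes to 14.
predicted = Node("+", None, Node(value=3.0),
                 Node("*", None, Node(value=4.0), Node(value=2.0)))
for fixed in one_step_fixes(predicted, 14.0):
    assert abs(evaluate(fixed) - 14.0) < 1e-9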