Paper Title
Causal Discovery and Knowledge Injection for Contestable Neural Networks (with Appendices)
Paper Authors
Paper Abstract
Neural networks have proven to be effective at solving machine learning tasks but it is unclear whether they learn any relevant causal relationships, while their black-box nature makes it difficult for modellers to understand and debug them. We propose a novel method overcoming these issues by allowing a two-way interaction whereby neural-network-empowered machines can expose the underpinning learnt causal graphs and humans can contest the machines by modifying the causal graphs before re-injecting them into the machines. The learnt models are guaranteed to conform to the graphs and adhere to expert knowledge, some of which can also be given up-front. By building a window into the model behaviour and enabling knowledge injection, our method allows practitioners to debug networks based on the causal structure discovered from the data and underpinning the predictions. Experiments with real and synthetic tabular data show that our method improves predictive performance up to 2.4x while producing parsimonious networks, up to 7x smaller in the input layer, compared to SOTA regularised networks.