Paper Title
Exploring Hidden Semantics in Neural Networks with Symbolic Regression
Paper Authors
Paper Abstract
Many recent studies focus on developing mechanisms to explain the black-box behaviors of neural networks (NNs). However, little work has been done to extract the potential hidden semantics (mathematical representations) of a neural network. A succinct and explicit mathematical representation of an NN model could improve the understanding and interpretation of its behavior. To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of an NN. SRNet creates a neural network Cartesian genetic programming (NNCGP) representation for the hidden semantics of a single layer in an NN. It then leverages a multi-chromosome NNCGP to represent the hidden semantics of all layers of the NN. The method uses a $(1+\lambda)$ evolution strategy (called MNNCGP-ES) to extract the final mathematical expressions of all layers in the NN. Experiments on 12 symbolic regression benchmarks and 5 classification benchmarks show that SRNet not only reveals the complex relationships between the layers of an NN but also extracts the mathematical representation of the whole NN. Compared with LIME and MAPLE, SRNet achieves higher interpolation accuracy and tends to approximate the real model on practical datasets.
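The abstract's core search loop is a $(1+\lambda)$ evolution strategy: a single parent genome produces $\lambda$ mutated offspring each generation, and the best offspring replaces the parent only if it is at least as fit. The sketch below illustrates that loop on a toy encoding. It is a minimal illustrative assumption, not the paper's implementation: here a genome is a coefficient vector over a small fixed function basis standing in for one layer's input-output mapping, rather than a multi-chromosome NNCGP graph, and all names (`decode`, `fitness`, `one_plus_lambda_es`, `BASIS`) are hypothetical.

```python
# A minimal sketch of a (1+lambda) evolution strategy, the kind of search
# loop MNNCGP-ES is built on. The encoding is a simplifying assumption:
# a coefficient vector over a fixed function basis, not an NNCGP genome.
import numpy as np

rng = np.random.default_rng(0)

# Fixed function basis; a real CGP genome would evolve the structure too.
BASIS = [np.sin, np.cos, np.tanh, lambda x: x, lambda x: x**2]

def decode(genome, x):
    """Evaluate the expression encoded by `genome` at inputs `x`."""
    return sum(c * f(x) for c, f in zip(genome, BASIS))

def fitness(genome, x, y_target):
    """Mean squared error between the decoded expression and the target."""
    return np.mean((decode(genome, x) - y_target) ** 2)

def one_plus_lambda_es(x, y_target, lam=4, sigma=0.1, generations=500):
    """(1+lambda)-ES: one parent, lam mutants per generation."""
    parent = rng.normal(size=len(BASIS))
    parent_fit = fitness(parent, x, y_target)
    for _ in range(generations):
        # Generate lam mutated offspring of the single parent.
        offspring = [parent + sigma * rng.normal(size=parent.shape)
                     for _ in range(lam)]
        fits = [fitness(o, x, y_target) for o in offspring]
        best = int(np.argmin(fits))
        # Selection: the parent survives unless an offspring ties or beats it.
        if fits[best] <= parent_fit:
            parent, parent_fit = offspring[best], fits[best]
    return parent, parent_fit

# Toy target standing in for a hidden layer's input-output behavior.
x = np.linspace(-2.0, 2.0, 200)
y = np.sin(x) + 0.5 * x**2
genome, err = one_plus_lambda_es(x, y)
print(f"MSE = {err:.4f}, coefficients = {np.round(genome, 2)}")
```

Accepting offspring of equal fitness (the `<=` in the selection step) permits neutral drift across the search space, a common choice in Cartesian genetic programming.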