Paper Title
On Resource-Efficient Bayesian Network Classifiers and Deep Neural Networks
Authors
Abstract
We present two methods to reduce the complexity of Bayesian network (BN) classifiers. First, we introduce quantization-aware training using the straight-through gradient estimator to quantize the parameters of BNs to a few bits. Second, we extend a recently proposed differentiable tree-augmented naive Bayes (TAN) structure learning approach to also take the model size into account. Both methods are motivated by recent developments in the deep learning community, and they provide effective means to trade off model size against prediction accuracy, as demonstrated in extensive experiments. Furthermore, we contrast quantized BN classifiers with quantized deep neural networks (DNNs) in small-scale scenarios, which have hardly been investigated in the literature. We report Pareto-optimal models with respect to model size, number of operations, and test error, and find that both model classes are viable options.
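As a rough illustration of the first idea, the following sketch shows quantization-aware training with the straight-through gradient estimator. The uniform quantizer, the clipping range [-1, 1], and the toy gradient values are hypothetical choices for the example, not the paper's exact quantization scheme: the forward pass uses quantized parameters, while the backward pass treats the non-differentiable rounding as the identity (zeroed outside the clipping range) and updates full-precision shadow parameters.

```python
import numpy as np

def quantize(w, num_bits=4):
    # Hypothetical uniform quantizer: snap w to the nearest of
    # 2**num_bits - 1 evenly spaced levels in [-1, 1].
    levels = 2 ** num_bits - 1
    w_clipped = np.clip(w, -1.0, 1.0)
    return np.round((w_clipped + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

def ste_grad(grad_output, w):
    # Straight-through estimator: pass the gradient through the
    # rounding step unchanged, but zero it where w was clipped.
    return grad_output * (np.abs(w) <= 1.0)

# Toy training step: gradients are computed w.r.t. the quantized
# parameters, then applied to the full-precision shadow parameters.
w = np.array([0.83, -0.27, 1.4])        # full-precision shadow parameters
w_q = quantize(w, num_bits=4)           # used in the forward pass
grad = np.array([0.1, -0.2, 0.3])       # hypothetical loss gradient w.r.t. w_q
w = w - 0.5 * ste_grad(grad, w)         # STE: dw_q/dw ~ 1 inside [-1, 1]
```

At deployment only the quantized parameters `w_q` are stored, which is where the model-size savings come from.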
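The Pareto-optimality criterion used to compare models can be sketched as follows; the helper name `pareto_front` and the example (size, error) pairs are illustrative, not taken from the paper. A model is kept if no other model is at least as good in both objectives and strictly better in one.

```python
def pareto_front(models):
    # models: list of (model_size, test_error) pairs; smaller is better
    # in both coordinates. Keep only the non-dominated pairs.
    front = []
    for i, (s, e) in enumerate(models):
        dominated = any(
            s2 <= s and e2 <= e and (s2 < s or e2 < e)
            for j, (s2, e2) in enumerate(models) if j != i
        )
        if not dominated:
            front.append((s, e))
    return sorted(front)

models = [(10, 0.05), (20, 0.04), (15, 0.05), (30, 0.04)]
print(pareto_front(models))  # → [(10, 0.05), (20, 0.04)]
```

Here (15, 0.05) is dominated by (10, 0.05) and (30, 0.04) by (20, 0.04), so neither lies on the front.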