Paper Title
Deep interpretable ensembles
Paper Authors
Paper Abstract
Ensembles improve prediction performance and allow uncertainty quantification by aggregating predictions from multiple models. In deep ensembling, the individual models are usually black-box neural networks or, more recently, partially interpretable semi-structured deep transformation models. However, the interpretability of the ensemble members is generally lost upon aggregation. This is a crucial drawback of deep ensembles in high-stakes decision fields, in which interpretable models are desired. We propose a novel transformation ensemble, which aggregates probabilistic predictions with the guarantee of preserving interpretability and yielding uniformly better predictions than the ensemble members on average. Transformation ensembles are tailored to interpretable deep transformation models but are applicable to a wider range of probabilistic neural networks. In experiments on several publicly available data sets, we demonstrate that transformation ensembles perform on par with classical deep ensembles in terms of prediction performance, discrimination, and calibration. In addition, we demonstrate how transformation ensembles quantify both aleatoric and epistemic uncertainty, and produce minimax optimal predictions under certain conditions.
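To make the aggregation contrast in the abstract concrete, here is a minimal sketch, not the paper's reference implementation. It assumes each ensemble member m provides a monotone transformation function h_m(y|x) and that the base distribution F_Z is the standard logistic, so a member's predictive CDF is F_Z(h_m(y|x)); all function names and coefficients below are hypothetical. A classical deep ensemble averages the member CDFs, producing a mixture that is in general no longer a transformation model, whereas a transformation ensemble averages the transformation functions before applying F_Z, so the aggregate remains a transformation model.

```python
import numpy as np
from scipy.stats import logistic  # assumed standard-logistic base distribution F_Z

def deep_ensemble_cdf(h_members, y):
    """Classical deep ensemble: average the member CDFs F_Z(h_m(y)).

    The result is a mixture distribution, which generally loses the
    interpretable transformation-model structure of the members.
    """
    return np.mean([logistic.cdf(h(y)) for h in h_members], axis=0)

def transformation_ensemble_cdf(h_members, y):
    """Transformation ensemble: average the transformations h_m(y) first,
    then apply the base CDF. The aggregate is itself a transformation
    model, so the members' interpretable structure is preserved.
    """
    h_bar = np.mean([h(y) for h in h_members], axis=0)
    return logistic.cdf(h_bar)

# Toy usage: three hypothetical members with linear transformations
# h_m(y) = a_m + b_m * y (coefficients invented for illustration only).
members = [lambda y, a=a, b=b: a + b * y
           for a, b in [(-0.5, 1.0), (0.0, 1.2), (0.3, 0.9)]]
y_grid = np.linspace(-3.0, 3.0, 7)
print(deep_ensemble_cdf(members, y_grid))
print(transformation_ensemble_cdf(members, y_grid))
```

Under this reading, averaging on the transformation scale is what allows the ensemble prediction to inherit any interpretable structure (e.g., additive or linear shift terms) shared by the members, which a CDF-level mixture would not.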