Paper Title
Deep Regression Ensembles
Paper Authors
Paper Abstract
We introduce a methodology for designing and training deep neural networks (DNN) that we call "Deep Regression Ensembles" (DRE). It bridges the gap between DNN and two-layer neural networks trained with random feature regression. Each layer of DRE has two components: randomly drawn input weights, and output weights trained myopically (as if it were the final output layer) using linear ridge regression. Within a layer, each neuron uses a different subset of inputs and a different ridge penalty, constituting an ensemble of random feature ridge regressions. Our experiments show that a single DRE architecture is on par with or exceeds state-of-the-art DNN on many datasets. Yet, because DRE neural weights are either known in closed form or randomly drawn, its computational cost is orders of magnitude smaller than that of a DNN.
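To illustrate the mechanism described in the abstract, here is a minimal NumPy sketch of a single DRE layer. It is an assumption-laden toy, not the paper's exact architecture: the subset sizes, number of neurons, ReLU activation, and penalty grid (`n_neurons`, `n_subsets`, `penalties`) are illustrative choices. What it does show faithfully is the two-component structure: randomly drawn input weights that are never trained, and output weights fitted myopically in closed form via ridge regression, ensembled over random input subsets and ridge penalties.

```python
import numpy as np

def dre_layer(X, y, n_neurons=64, n_subsets=4, penalties=(0.1, 1.0, 10.0), seed=0):
    """Toy sketch of one Deep Regression Ensembles (DRE) layer.

    Each ensemble member draws a random input subset and random input
    weights, then fits its output weights myopically (as if it were the
    final output layer) via closed-form ridge regression.
    Hyperparameters here are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    outputs = []
    for _ in range(n_subsets):
        # Random subset of inputs for this group of neurons.
        idx = rng.choice(d, size=max(1, d // 2), replace=False)
        # Randomly drawn input weights -- never trained.
        W = rng.standard_normal((idx.size, n_neurons)) / np.sqrt(idx.size)
        H = np.maximum(X[:, idx] @ W, 0.0)  # random ReLU features
        for lam in penalties:
            # Closed-form ridge regression for the output weights.
            beta = np.linalg.solve(H.T @ H + lam * np.eye(n_neurons), H.T @ y)
            outputs.append(H @ beta)
    # Stacked member predictions become the next layer's inputs.
    return np.column_stack(outputs)

# Toy usage: one DRE layer on synthetic data.
X = np.random.default_rng(1).standard_normal((100, 10))
y = X[:, 0] - 0.5 * X[:, 1]
Z = dre_layer(X, y)
print(Z.shape)  # (100, n_subsets * len(penalties)) = (100, 12)
```

Because every output weight has a closed-form solution and every input weight is drawn rather than optimized, no gradient descent is needed, which is the source of the computational savings the abstract claims.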