Paper Title

Active Learning for Single Neuron Models with Lipschitz Non-Linearities

Authors

Aarshvi Gajjar, Chinmay Hegde, Christopher Musco

Abstract

We consider the problem of active learning for single neuron models, also sometimes called ``ridge functions'', in the agnostic setting (under adversarial label noise). Such models have been shown to be broadly effective in modeling physical phenomena, and for constructing surrogate data-driven models for partial differential equations. Surprisingly, we show that for a single neuron model with any Lipschitz non-linearity (such as the ReLU, sigmoid, absolute value, low-degree polynomial, among others), strong provable approximation guarantees can be obtained using a well-known active learning strategy for fitting \emph{linear functions} (i.e., for the case when there is no non-linearity) in the agnostic setting. Namely, we can collect samples via statistical \emph{leverage score sampling}, which has been shown to be near-optimal in other active learning scenarios. We support our theoretical results with empirical simulations showing that our proposed active learning strategy based on leverage score sampling outperforms (ordinary) uniform sampling when fitting single neuron models.
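Below is a minimal Python/NumPy sketch (not the authors' implementation) of the pipeline the abstract describes: compute statistical leverage scores of the design matrix, query labels for a small set of points sampled with probabilities proportional to those scores, and fit a single neuron model on the reweighted sample. The choice of ReLU non-linearity, the plain gradient-descent fit, and all hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: leverage score sampling to choose which points to label,
# then fitting a single ReLU neuron y ~ phi(x . w) on the reweighted sample.
import numpy as np


def leverage_scores(X):
    # tau_i = squared norm of row i of an orthonormal basis for col(X); sums to rank(X).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)


def sample_by_leverage(X, y, m, rng):
    # Draw m indices i.i.d. with probability proportional to leverage scores,
    # returning importance weights that keep the weighted loss unbiased.
    tau = leverage_scores(X)
    p = tau / tau.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    return X[idx], y[idx], 1.0 / (m * p[idx])


def fit_single_neuron(Xs, ys, s, lr=0.1, steps=3000, seed=0):
    # Weighted squared loss for y ~ ReLU(x . w), minimized by plain gradient descent
    # (an illustrative fitting procedure, not the one analyzed in the paper).
    phi = lambda z: np.maximum(z, 0.0)
    dphi = lambda z: (z > 0).astype(float)
    w = 0.01 * np.random.default_rng(seed).standard_normal(Xs.shape[1])
    for _ in range(steps):
        z = Xs @ w
        grad = Xs.T @ (s * (phi(z) - ys) * dphi(z)) / s.sum()
        w -= lr * grad
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, m = 5000, 20, 200                      # label only m of the n candidate points
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = np.maximum(X @ w_true, 0.0) + 0.1 * rng.standard_normal(n)   # noisy ReLU neuron
    Xs, ys, s = sample_by_leverage(X, y, m, rng)
    w_hat = fit_single_neuron(Xs, ys, s)
    print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```

The same sampling routine can be compared against uniform sampling (replace the probabilities `p` with a uniform distribution) to mirror the comparison reported in the abstract.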
