Paper Title

On the Multidimensional Random Subset Sum Problem

Paper Authors

Becchetti, Luca, da Cunha, Arthur Carvalho Walraven, Clementi, Andrea, d'Amore, Francesco, Lesfari, Hicham, Natale, Emanuele, Trevisan, Luca

Paper Abstract

In the Random Subset Sum Problem, given $n$ i.i.d. random variables $X_1, \dots, X_n$, we wish to approximate any point $z \in [-1,1]$ as the sum of a suitable subset $X_{i_1(z)}, \dots, X_{i_s(z)}$ of them, up to error $\varepsilon$. Despite its simple statement, this problem is of fundamental interest to both theoretical computer science and statistical mechanics. More recently, it gained renewed attention for its implications in the theory of Artificial Neural Networks. An obvious multidimensional generalisation of the problem is to consider $n$ i.i.d. $d$-dimensional random vectors, with the objective of approximating every point $\mathbf{z} \in [-1,1]^d$. In 1998, G. S. Lueker showed that, in the one-dimensional setting, $n = \mathcal{O}(\log \frac{1}{\varepsilon})$ samples guarantee the approximation property with high probability. In this work, we prove that, in $d$ dimensions, $n = \mathcal{O}(d^3 \log \frac{1}{\varepsilon} \cdot (\log \frac{1}{\varepsilon} + \log d))$ samples suffice for the approximation property to hold with high probability. As an application highlighting the potential interest of this result, we prove that a recently proposed neural network model exhibits universality: with high probability, the model can approximate any neural network within a polynomial overhead in the number of parameters.
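The one-dimensional approximation property stated in the abstract can be checked empirically with a small sketch. The snippet below draws $n = \lceil c \log \frac{1}{\varepsilon} \rceil$ i.i.d. Uniform$[-1,1]$ samples and verifies that random targets $z \in [-1,1]$ lie within $\varepsilon$ of some subset sum; the constant `c = 8` and the grid-pruned dynamic program are illustrative choices for the demo, not the construction from Lueker's proof or from the paper.

```python
import bisect
import math
import random

def random_subset_sum_coverage(eps=0.05, c=8, trials=200, seed=0):
    """Empirical check of the 1-D Random Subset Sum approximation property.

    Draws n = ceil(c * log(1/eps)) i.i.d. Uniform[-1, 1] samples and returns
    the worst distance from random targets z in [-1, 1] to the nearest
    reachable subset sum. The constant c is an illustrative choice.
    """
    rng = random.Random(seed)
    n = math.ceil(c * math.log(1 / eps))
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]

    # Dynamic program over exact subset sums; to keep the state set small,
    # retain at most one representative per cell of an (eps/4)-wide grid.
    grid = eps / 4
    cells = {0: 0.0}  # grid index -> one exact reachable subset sum
    for x in xs:
        for s in list(cells.values()):  # snapshot: each x is used at most once
            t = s + x
            if -2.0 <= t <= 2.0:  # sums far outside [-1, 1] are never needed
                cells.setdefault(round(t / grid), t)

    sums = sorted(cells.values())
    worst = 0.0
    for _ in range(trials):
        z = rng.uniform(-1.0, 1.0)
        i = bisect.bisect_left(sums, z)
        best = min(abs(sums[j] - z) for j in (i - 1, i) if 0 <= j < len(sums))
        worst = max(worst, best)
    return worst
```

With these settings ($\varepsilon = 0.05$, so $n = 24$ samples), the reachable subset sums densely cover $[-1,1]$ and the worst observed error stays below $\varepsilon$, consistent with the logarithmic sample bound in the one-dimensional case.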
