Paper Title
Improving Neural Network with Uniform Sparse Connectivity
Paper Authors
Paper Abstract
Neural networks form the foundation of deep learning and numerous AI applications. Classical neural networks are fully connected, expensive to train, and prone to overfitting. Sparse networks tend to require convoluted structure search, and suffer from suboptimal performance and limited usage. We propose the novel uniform sparse network (USN), with even and sparse connectivity within each layer. USN has the striking property that its performance is independent of substantial topology variation over an enormous model space, and thus offers a search-free solution to all of the above issues. USN consistently and substantially outperforms state-of-the-art sparse network models in prediction accuracy, speed, and robustness. It even achieves higher prediction accuracy than the fully connected network with only 0.55% of the parameters and 1/4 of the computing time and resources. Importantly, USN is conceptually simple, as a natural generalization of the fully connected network with multiple improvements in accuracy, robustness, and scalability. USN can replace the latter in a range of applications, data types, and deep learning architectures. We have made USN open source at https://github.com/datapplab/sparsenet.
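The abstract does not spell out the USN construction, but a minimal sketch of "even and sparse connectivity within each layer" is a binary connectivity mask in which every output unit connects to the same small number of inputs, spread so that every input is used a balanced number of times. The function below is an illustrative assumption, not the authors' implementation; see the linked repository for the actual code.

```python
import numpy as np

def uniform_sparse_mask(n_in: int, n_out: int, k: int) -> np.ndarray:
    """Illustrative sketch (not the official USN code): build an
    (n_in, n_out) binary mask where each output unit connects to
    exactly k inputs, evenly spaced and offset per unit so that
    input usage stays balanced across the layer."""
    mask = np.zeros((n_in, n_out), dtype=np.float32)
    for j in range(n_out):
        # k evenly spaced input indices, shifted by the output index
        idx = (np.arange(k) * n_in // k + j) % n_in
        mask[idx, j] = 1.0
    return mask

# A dense weight matrix masked this way keeps only k/n_in of its
# parameters, e.g. 2/8 = 25% here:
mask = uniform_sparse_mask(8, 4, 2)
```

In training, such a mask would be multiplied elementwise into the layer's weight matrix on every forward pass, so the pruned connections stay at zero throughout.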