Paper Title

Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training

Paper Authors

Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu

Paper Abstract

Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch. Existing sparse training methods usually strive to find the best sparse subnetwork possible in one single run, without involving any expensive dense or pre-training steps. For instance, dynamic sparse training (DST) is capable of reaching performance competitive with dense training by iteratively evolving the sparse topology during the course of training. In this paper, we argue that it is better to allocate the limited resources to create multiple low-loss sparse subnetworks and superpose them into a stronger one, instead of allocating all resources entirely to find an individual subnetwork. To achieve this, two desiderata are required: (1) efficiently producing many low-loss subnetworks, the so-called cheap tickets, within one training process limited to the standard training time used in dense training; (2) effectively superposing these cheap tickets into one stronger subnetwork. To corroborate our conjecture, we present a novel sparse training approach, termed Sup-tickets, which can satisfy the above two desiderata concurrently in a single sparse-to-sparse training process. Across various modern architectures on CIFAR-10/100 and ImageNet, we show that Sup-tickets integrates seamlessly with existing sparse training methods and demonstrates consistent performance improvement.
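The superposition step described in the abstract can be pictured as weight averaging over sparse checkpoints. Below is a minimal sketch, assuming each cheap ticket is simply a (weights, mask) snapshot collected during one sparse training run and that tickets are superposed by averaging weights over the union of their masks; the random_ticket helper and the union-mask averaging scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): harvest several
# sparse "cheap tickets" as weight snapshots from one training run, then
# superpose them by averaging weights wherever their masks overlap.

rng = np.random.default_rng(0)
n_params, sparsity, n_tickets = 1000, 0.9, 4

def random_ticket():
    """Stand-in for a low-loss sparse checkpoint: returns (weights, binary mask)."""
    mask = (rng.random(n_params) > sparsity).astype(np.float32)
    weights = rng.normal(size=n_params).astype(np.float32) * mask
    return weights, mask

tickets = [random_ticket() for _ in range(n_tickets)]

# Superpose: each parameter is averaged over the tickets in which it is active.
# Note that the result is as sparse as the union of the ticket masks, which is
# denser than any single ticket; the actual method's mask handling may differ.
weight_sum = sum(w for w, _ in tickets)
mask_sum = sum(m for _, m in tickets)
superposed = np.where(mask_sum > 0, weight_sum / np.maximum(mask_sum, 1.0), 0.0)

print("union sparsity:", 1.0 - (mask_sum > 0).mean())
```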
