Paper Title
Is Network the Bottleneck of Distributed Training?
Paper Authors
Paper Abstract
Recently there has been a surge of research on improving the communication efficiency of distributed training. However, little work has been done to systematically understand whether the network is the bottleneck and to what extent. In this paper, we take a first-principles approach to measure and analyze the network performance of distributed training. As expected, our measurement confirms that communication is the component that blocks distributed training from linear scale-out. However, contrary to common belief, we find that the network is running at low utilization and that, if the network could be fully utilized, distributed training would achieve a scaling factor close to one. Moreover, while many recent proposals on gradient compression advocate compression ratios of over 100x, we show that under full network utilization there is no need for gradient compression on a 100 Gbps network. On the other hand, a lower-speed network such as 10 Gbps requires only a 2x--5x gradient compression ratio to achieve almost linear scale-out. Compared to application-level techniques like gradient compression, network-level optimizations do not require changes to applications and do not hurt the performance of trained models. As such, we advocate that the real challenge of distributed training is for the network community to develop high-performance network transport that fully utilizes the network capacity and achieves linear scale-out.
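To make the bandwidth arithmetic behind these claims concrete, below is a minimal back-of-envelope sketch in Python. It is not code from the paper: the model size (a hypothetical 25M-parameter fp32 model, roughly 100 MB of gradients), the 200 ms per-iteration compute time, the assumption of no compute/communication overlap, and the assumption of full link utilization are all illustrative choices, and the scaling_factor function is our own naming.

    # Back-of-envelope estimate of the data-parallel scaling factor,
    # assuming gradient exchange is not overlapped with computation
    # and the link runs at full utilization. All numbers are illustrative.

    def scaling_factor(compute_s, grad_bytes, bandwidth_gbps, compression=1.0):
        """Scaling factor ~= compute time / (compute time + communication time)."""
        comm_s = (grad_bytes * 8) / (bandwidth_gbps * 1e9) / compression
        return compute_s / (compute_s + comm_s)

    # Hypothetical workload: 25M fp32 parameters (~100 MB of gradients)
    # and a 200 ms compute phase per iteration.
    GRAD_BYTES = 25e6 * 4
    COMPUTE_S = 0.2

    for bw in (10, 100):                 # link speed in Gbps
        for ratio in (1, 2, 5):          # gradient compression ratio
            sf = scaling_factor(COMPUTE_S, GRAD_BYTES, bw, ratio)
            print(f"{bw:>3} Gbps, {ratio}x compression: scaling factor {sf:.2f}")

Under these assumptions, the 100 Gbps link already yields a scaling factor above 0.95 with no compression, while the 10 Gbps link moves from about 0.71 uncompressed to above 0.9 at 5x compression, which is consistent with the abstract's claim that full utilization of a fast network removes the need for aggressive gradient compression.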