Paper Title
EnergonAI: An Inference System for 10-100 Billion Parameter Transformer Models
Paper Authors
Paper Abstract
Large transformer models display promising performance on a wide range of natural language processing (NLP) tasks. Although the AI community has expanded the model scale to the trillion-parameter level, the practical deployment of 10-100 billion parameter models remains uncertain due to latency, throughput, and memory constraints. In this paper, we propose EnergonAI to address the challenges of efficiently deploying 10-100 billion parameter transformer models on single- or multi-GPU systems. EnergonAI adopts a hierarchy-controller system architecture to coordinate multiple devices and efficiently support different parallel patterns. It delegates the execution of sub-models to multiple workers in the single-controller style and applies tensor parallelism and pipeline parallelism among the workers in the multi-controller style. On top of this novel architecture, we propose three techniques: non-blocking pipeline parallelism, distributed redundant computation elimination, and peer memory pooling. EnergonAI enables users to program complex parallel code in the same way as serial code. Compared with FasterTransformer, we demonstrate that EnergonAI achieves superior latency and throughput. In our experiments, EnergonAI achieves a 37% latency reduction with tensor parallelism and a 10% scalability improvement with pipeline parallelism, and it increases the scale of models that can be inferred on a single GPU by using a larger heterogeneous memory space, at the cost of a limited performance reduction.
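To make the single-controller/multi-controller split described in the abstract concrete, below is a minimal, hypothetical Python sketch. It is not EnergonAI's actual API; the Controller and Worker classes and their methods are illustrative assumptions. The idea shown: the user issues one serial-looking inference call to a single controller, which dispatches the request to multiple workers that would each execute their tensor- or pipeline-parallel sub-model.

```python
# Hypothetical sketch of the hierarchy-controller pattern (not EnergonAI's real API).
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Worker:
    """Stands in for one GPU worker holding a sub-model shard."""
    rank: int

    def run_shard(self, request: str) -> str:
        # A real worker would execute a tensor- or pipeline-parallel shard of
        # the transformer; here we just echo to keep the sketch self-contained.
        return f"worker {self.rank} processed: {request}"


class Controller:
    """Single-controller front end: the user writes serial-looking calls,
    while the controller fans work out to multi-controller-style workers."""

    def __init__(self, num_workers: int):
        self.workers = [Worker(rank=i) for i in range(num_workers)]
        self.pool = ThreadPoolExecutor(max_workers=num_workers)

    def infer(self, request: str) -> list[str]:
        # Dispatch is non-blocking: every worker starts immediately and the
        # controller only waits when it gathers the results.
        futures = [self.pool.submit(w.run_shard, request) for w in self.workers]
        return [f.result() for f in futures]


if __name__ == "__main__":
    controller = Controller(num_workers=4)
    print(controller.infer("translate: hello world"))
```

The single entry point (controller.infer) is what lets the user keep writing serial code, while the worker pool is where the paper's tensor parallelism, pipeline parallelism, and peer memory pooling would actually live.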