Paper Title
Fast Inference from Transformers via Speculative Decoding
Paper Authors
Paper Abstract
Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method can accelerate existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
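To make the abstract's "novel sampling method" concrete, here is a minimal Python sketch of one speculative decoding step under toy assumptions: a cheap draft model q proposes gamma tokens, the large target model p scores all of them at once, each drafted token is accepted with probability min(1, p(x)/q(x)), and on the first rejection a replacement is drawn from the normalized residual max(0, p - q), which keeps the output distribution exactly that of p. All names here (toy_model, draft_probs, target_probs, VOCAB, gamma) are hypothetical stand-ins for this sketch, not identifiers from the paper or the T5X codebase.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50  # toy vocabulary size (assumption for this sketch)

def toy_model(temperature):
    """Stand-in for a language model: maps a token prefix to a
    next-token distribution. Both 'models' here are random toys."""
    def probs(prefix):
        local = np.random.default_rng(hash((tuple(prefix), temperature)) % 2**32)
        logits = local.normal(size=VOCAB) / temperature
        e = np.exp(logits - logits.max())
        return e / e.sum()
    return probs

target_probs = toy_model(temperature=1.0)  # the large model p
draft_probs = toy_model(temperature=1.5)   # the small approximation model q

def speculative_step(prefix, gamma=4):
    """One speculative decoding step: draft gamma tokens with q,
    verify them with p, and return the accepted continuation."""
    # 1) Draft gamma tokens autoregressively from the cheap model q.
    drafted, q_dists = [], []
    ctx = list(prefix)
    for _ in range(gamma):
        q = draft_probs(ctx)
        tok = rng.choice(VOCAB, p=q)
        q_dists.append(q)
        drafted.append(tok)
        ctx.append(tok)

    # 2) Score all gamma+1 positions with the big model p. With a real
    # Transformer this is a single parallel forward pass, not a loop.
    p_dists = [target_probs(list(prefix) + drafted[:i]) for i in range(gamma + 1)]

    # 3) Accept each drafted token with probability min(1, p(x)/q(x)).
    accepted = []
    for i, tok in enumerate(drafted):
        p, q = p_dists[i], q_dists[i]
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
        else:
            # On rejection, resample from the residual max(0, p - q),
            # renormalized; this correction keeps the outputs
            # distributed exactly as if sampled from p alone.
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(rng.choice(VOCAB, p=residual))
            return accepted

    # 4) All drafts accepted: sample one bonus token from p's last position.
    accepted.append(rng.choice(VOCAB, p=p_dists[gamma]))
    return accepted

print(speculative_step(prefix=[1, 2, 3]))
```

The speedup claimed in the abstract comes from step 2: verifying gamma drafted tokens costs roughly one forward pass of the large model, so every accepted draft token is a serial large-model run saved, while the acceptance/residual rule guarantees the outputs remain identical in distribution to standard decoding.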