Paper Title

AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch

Paper Authors

Danopoulos, Dimitrios, Zervakis, Georgios, Siozios, Kostas, Soudris, Dimitrios, Henkel, Jörg

Paper Abstract

Current state-of-the-art employs approximate multipliers to address the highly increased power demands of DNN accelerators. However, evaluating the accuracy of approximate DNNs is cumbersome due to the lack of adequate support for approximate arithmetic in DNN frameworks. We address this inefficiency by presenting AdaPT, a fast emulation framework that extends PyTorch to support approximate inference as well as approximation-aware retraining. AdaPT can be seamlessly deployed and is compatible with most DNNs. We evaluate the framework on several DNN models and application fields, including CNNs, LSTMs, and GANs, for a number of approximate multipliers with distinct bitwidth values. The results show substantial error recovery from approximate retraining and up to 53.9x reduced inference time with respect to the baseline approximate implementation.
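The abstract describes emulating approximate multipliers of a given bitwidth inside PyTorch layers. The sketch below illustrates only the general lookup-table (LUT) emulation idea, assuming a toy 8-bit multiplier that truncates the two lowest bits of each operand; `ApproxLinear` and `build_approx_lut` are hypothetical names and do not reflect AdaPT's actual API or any multiplier evaluated in the paper.

```python
import torch
import torch.nn as nn

def build_approx_lut(bits=8):
    """Build a (2^bits x 2^bits) product LUT for a toy approximate multiplier
    that truncates the two lowest bits of each signed operand."""
    n = 1 << bits
    vals = torch.arange(n, dtype=torch.int32)
    # Interpret row/column indices as signed integers in [-2^(bits-1), 2^(bits-1))
    signed = torch.where(vals >= n // 2, vals - n, vals)
    trunc = (signed >> 2) << 2                 # toy approximation: drop 2 LSBs
    return trunc[:, None] * trunc[None, :]     # exact product of truncated operands

class ApproxLinear(nn.Module):
    """Linear layer whose multiplications are read from an approximate-multiplier LUT."""
    def __init__(self, in_features, out_features, lut, bits=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.lut, self.bits = lut, bits

    def forward(self, x):
        qmax = (1 << (self.bits - 1)) - 1
        # Symmetric per-tensor quantization of activations and weights to int8
        xs = x.abs().max().clamp(min=1e-8) / qmax
        ws = self.weight.abs().max().clamp(min=1e-8) / qmax
        xq = torch.clamp(torch.round(x / xs), -qmax - 1, qmax).to(torch.int64)
        wq = torch.clamp(torch.round(self.weight / ws), -qmax - 1, qmax).to(torch.int64)
        n = 1 << self.bits
        # Index the LUT with (operand mod 2^bits) so negative values map to the right rows
        prods = self.lut[xq[:, None, :] % n, wq[None, :, :] % n]   # (batch, out, in)
        acc = prods.sum(dim=-1).to(torch.float32)
        return acc * xs * ws                   # dequantize the accumulated products

lut = build_approx_lut()
layer = ApproxLinear(16, 4, lut)
print(layer(torch.randn(2, 16)).shape)         # torch.Size([2, 4])
```

In a LUT-based setup of this kind, approximation-aware retraining amounts to keeping such layers in the forward pass during training (typically with a straight-through estimator around the quantization step), which is consistent with the retraining flow the abstract describes.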
