Paper Title
HEAM: High-Efficiency Approximate Multiplier Optimization for Deep Neural Networks
Paper Authors
Abstract
We propose an optimization method for the automatic design of approximate multipliers that minimizes the average error according to the operand distributions. Our multiplier achieves up to 50.24% higher accuracy than the best reproduced approximate multiplier in DNNs, with 15.76% smaller area, 25.05% less power consumption, and 3.50% shorter delay. Compared with an exact multiplier, our multiplier reduces the area, power consumption, and delay by 44.94%, 47.63%, and 16.78%, respectively, with negligible accuracy loss. The tested DNN accelerator modules with our multiplier obtain up to 18.70% smaller area and 9.99% less power consumption than the original modules.
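The objective named in the abstract — the average error of an approximate multiplier weighted by the operand distribution — can be illustrated with a minimal sketch. The truncation-based multiplier and the toy uniform distribution below are hypothetical stand-ins, not the paper's actual design; the point is only how a candidate design would be scored against an operand distribution.

```python
def exact_mul(a, b):
    """Reference exact product."""
    return a * b

def truncated_mul(a, b, drop_bits=2):
    """Hypothetical approximate multiplier: zero out the low-order
    bits of each operand before multiplying (a common, cheap
    approximation style; NOT the paper's optimized design)."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

def weighted_mean_error(approx, dist):
    """Average absolute error of `approx`, weighted by the operand
    distribution `dist`: a mapping (a, b) -> probability, e.g.
    estimated from the operands seen in a DNN layer (an assumption
    here)."""
    return sum(p * abs(approx(a, b) - exact_mul(a, b))
               for (a, b), p in dist.items())

# Toy operand distribution: uniform over small 3-bit operands.
dist = {(a, b): 1 / 64 for a in range(8) for b in range(8)}
err = weighted_mean_error(truncated_mul, dist)
```

An optimizer in this spirit would search over candidate multiplier structures and keep the one minimizing `weighted_mean_error` under the measured operand distribution, trading off against area, power, and delay.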