Title

Trends in Energy Estimates for Computing in AI/Machine Learning Accelerators, Supercomputers, and Compute-Intensive Applications

Authors

Sadasivan Shankar and Albert Reuther

Abstract

We examine the computational energy requirements of different systems driven by the geometrical scaling law and the increasing use of Artificial Intelligence or Machine Learning (AI-ML) over the last decade. With more scientific and technology applications based on data-driven discovery, machine learning methods, especially deep neural networks, have become widely used. In order to enable such applications, both hardware accelerators and advanced AI-ML methods have led to the introduction of new architectures, system designs, algorithms, and software. Our analysis of energy trends indicates three important observations: 1) Energy efficiency due to geometrical scaling is slowing down; 2) The energy efficiency at the bit level does not translate into efficiency at the instruction level, or at the system level for a variety of systems, especially for large-scale AI-ML accelerators or supercomputers; 3) At the application level, general-purpose AI-ML methods can be computationally energy intensive, offsetting the gains in energy from geometrical scaling and special-purpose accelerators. Further, our analysis provides specific pointers for integrating energy efficiency with performance analysis to enable high-performance and sustainable computing in the future.
