Paper Title
Proximu$: Efficiently Scaling DNN Inference in Multi-core CPUs through Near-Cache Compute
Paper Authors
Paper Abstract
Deep Neural Network (DNN) inference is emerging as the fundamental bedrock for a multitude of utilities and services. CPUs continue to scale up their raw compute capabilities for DNN inference, along with mature high-performance libraries to extract optimal performance. While general-purpose CPUs offer uniquely attractive advantages for DNN inference at both the datacenter and the edge, they have primarily evolved to optimize single-thread performance. For highly parallel, throughput-oriented DNN inference, this results in inefficiencies in both power and performance, impacting both raw performance scaling and overall performance/watt. We present Proximu$, where we systematically tackle the root inefficiencies in power and performance scaling for CPU DNN inference. Performance scales efficiently by distributing light-weight tensor compute near all caches in a multi-level cache hierarchy. This maximizes the cumulative utilization of the existing bandwidth resources in the system and minimizes movement of data. Power is drastically reduced through simple ISA extensions that encode the structured, loop-y workload behavior. This enables a bulk offload of pre-decoded work, with loop unrolling in the light-weight near-cache units, effectively bypassing the power-hungry stages of the wide Out-of-Order (OOO) CPU pipeline. Across a number of DNN models, Proximu$ achieves a 2.3x increase in convolution performance/watt with a 2x to 3.94x scaling in raw performance. Similarly, Proximu$ achieves a 1.8x increase in inner-product performance/watt with 2.8x scaling in performance. With no changes to the programming model, no increase in cache capacity or bandwidth, and minimal additional hardware, Proximu$ enables unprecedented CPU efficiency gains while achieving performance similar to state-of-the-art Domain-Specific Accelerators (DSAs) for DNN inference in this AI era.
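The bulk-offload idea described in the abstract can be pictured with a short C sketch. This is purely illustrative and is not the paper's actual ISA extension or hardware interface: the names `tensor_desc`, `near_cache_offload`, and `NUM_CACHE_SLICES` are hypothetical stand-ins. The sketch shows the general pattern the abstract suggests, where a structured, loop-y tensor kernel is encoded once as a compact work descriptor and handed off in bulk to light-weight near-cache units, rather than being re-fetched and re-decoded each iteration by the wide OOO pipeline.

```c
/*
 * Illustrative sketch only -- not Proximu$'s actual ISA or hardware interface.
 * All names (tensor_desc, near_cache_offload, NUM_CACHE_SLICES) are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

#define NUM_CACHE_SLICES 8  /* hypothetical count of near-cache compute units */

/* Hypothetical descriptor encoding the loop bounds and strides of a tensor op. */
typedef struct {
    const int8_t *src_a;   /* input activations */
    const int8_t *src_b;   /* weights */
    int32_t      *dst;     /* output accumulators */
    size_t rows, cols, k;  /* GEMM-style loop bounds */
    size_t row_stride;     /* elements between consecutive output rows */
} tensor_desc;

/* Hypothetical intrinsic standing in for the bulk-offload ISA extension:
 * the descriptor is decoded once, and the near-cache unit unrolls the loops. */
void near_cache_offload(int slice_id, const tensor_desc *d);

/* Software view: partition output rows across near-cache units so each slice
 * operates on data close to its own cache, using cumulative cache bandwidth
 * and minimizing data movement back to the core. */
void offload_gemm(const tensor_desc *whole)
{
    size_t rows_per_slice =
        (whole->rows + NUM_CACHE_SLICES - 1) / NUM_CACHE_SLICES;

    for (int s = 0; s < NUM_CACHE_SLICES; s++) {
        size_t first = (size_t)s * rows_per_slice;
        if (first >= whole->rows)
            break;

        tensor_desc part = *whole;
        part.src_a += first * whole->k;          /* this slice's input rows   */
        part.dst   += first * whole->row_stride; /* matching output offset    */
        part.rows   = (whole->rows - first < rows_per_slice)
                        ? (whole->rows - first)
                        : rows_per_slice;

        near_cache_offload(s, &part);            /* one bulk hand-off per slice */
    }
}
```

Under these assumptions, the host core issues only one descriptor per slice, which is consistent with the abstract's claim that the power-hungry front-end stages of the OOO pipeline are largely bypassed for the inner loops.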