Paper Title

EvoJAX: Hardware-Accelerated Neuroevolution

Authors

Yujin Tang, Yingtao Tian, David Ha

Abstract

Evolutionary computation has been shown to be a highly effective method for training neural networks, particularly when employed at scale on CPU clusters. Recent work has also showcased its effectiveness on hardware accelerators, such as GPUs, but so far such demonstrations have been tailored to very specific tasks, limiting applicability to other domains. We present EvoJAX, a scalable, general-purpose, hardware-accelerated neuroevolution toolkit. Building on top of the JAX library, our toolkit enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPUs/GPUs. EvoJAX achieves very high performance by implementing the evolution algorithm, neural network, and task all in NumPy, which is compiled just-in-time to run on accelerators. We provide extensible examples of EvoJAX for a wide range of tasks, including supervised learning, reinforcement learning, and generative art. Since EvoJAX can find solutions to most of these tasks within minutes on a single accelerator, compared to hours or days when using CPUs, our toolkit can significantly shorten the iteration cycle of evolutionary computation experiments. EvoJAX is available at https://github.com/google/evojax
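The core idea the abstract describes, NumPy-style code that JAX compiles just-in-time and vectorizes across a population so that fitness evaluation runs in parallel on an accelerator, can be sketched as follows. This is an illustrative toy (the `fitness` function, population size, and parameter shapes are all made up for the example), not the EvoJAX API itself:

```python
import jax
import jax.numpy as jnp

# Toy fitness function for a single candidate parameter vector:
# a linear model scored by negative mean squared error.
def fitness(params, inputs, targets):
    preds = inputs @ params
    return -jnp.mean((preds - targets) ** 2)

# vmap maps fitness over the population axis; jit compiles the whole
# batched evaluation so it runs as one fused program on a TPU/GPU.
batched_fitness = jax.jit(jax.vmap(fitness, in_axes=(0, None, None)))

population = jax.random.normal(jax.random.PRNGKey(0), (64, 8))  # 64 candidates, 8 params each
inputs = jax.random.normal(jax.random.PRNGKey(1), (32, 8))
targets = jnp.zeros(32)

scores = batched_fitness(population, inputs, targets)  # shape (64,), one score per candidate
```

An evolution strategy would then use `scores` to update its search distribution and sample the next population; because every step is expressed in this NumPy-like style, the whole loop stays on the accelerator.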
