Paper Title

Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures

Paper Authors

Farias, Felipe Costa; Ludermir, Teresa Bernarda; Bastos-Filho, Carmelo Jose Albanez

Paper Abstract

Defining a neural network architecture is one of the most critical and challenging tasks to perform. In this paper, we propose ParallelMLPs, a procedure that enables the training of several independent Multi-Layer Perceptron neural networks with different numbers of neurons and activation functions in parallel, by exploiting the principle of locality and the parallelization capabilities of modern CPUs and GPUs. The core idea of this technique is a Modified Matrix Multiplication that replaces an ordinary matrix multiplication with two simple matrix operations, allowing separate and independent paths for gradient flow; the construction can also be used in other scenarios. We assessed our algorithm on simulated datasets, varying the number of samples, features, and batches, using 10,000 different models. We achieved training speedups of 1 to 4 orders of magnitude compared to the sequential approach.
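The abstract's key mechanism, separate and independent gradient-flow paths for many sub-networks trained in a single pass, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the paper's actual Modified Matrix Multiplication: it concatenates the hidden layers of several one-hidden-layer MLPs with different hidden sizes into one wide layer and applies a block-diagonal mask to the output weights, so each model's loss propagates gradients only to its own parameters. All names and sizes below are illustrative assumptions.

```python
import torch

# Hypothetical sketch: train 3 independent 1-hidden-layer MLPs
# (hidden sizes 4, 8, 16) on the same batch in one forward pass.
# A block-diagonal mask on the hidden-to-output weights keeps the
# models' gradient paths separate, since masked entries always
# receive zero gradient. This stands in for the paper's Modified
# Matrix Multiplication, which is not reproduced here.

torch.manual_seed(0)
n_features, hidden_sizes = 10, [4, 8, 16]
total_hidden = sum(hidden_sizes)            # 28 hidden units in total
n_models = len(hidden_sizes)

# First layer: all models' hidden units side by side (10 -> 28).
W1 = torch.randn(n_features, total_hidden, requires_grad=True)
# Second layer: one scalar output column per model (28 -> 3).
W2 = torch.randn(total_hidden, n_models, requires_grad=True)

# Block-diagonal mask: hidden unit j feeds only its own model.
mask = torch.zeros(total_hidden, n_models)
start = 0
for m, h in enumerate(hidden_sizes):
    mask[start:start + h, m] = 1.0
    start += h

x = torch.randn(32, n_features)             # one shared batch
y = torch.randn(32, 1)                      # shared regression target

opt = torch.optim.SGD([W1, W2], lr=1e-2)
for step in range(100):
    hidden = torch.tanh(x @ W1)             # all hidden layers at once
    preds = hidden @ (W2 * mask)            # (32, 3): one column per model
    # Per-model losses; their sum has disjoint gradient paths.
    loss = ((preds - y) ** 2).mean(dim=0).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the mask zeroes every cross-model weight, the gradient of the summed loss with respect to each model's block of W1 and W2 is exactly what that model would receive if trained alone, so the models train independently while sharing one large, hardware-friendly matrix multiplication. Per-model activation functions could likewise be applied block-wise instead of the single tanh used above.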
