Paper Title
Aggregation in the Mirror Space (AIMS): Fast, Accurate Distributed Machine Learning in Military Settings
Paper Authors
Paper Abstract
Distributed machine learning (DML) can be an important capability for a modern military, allowing it to take advantage of data and devices distributed at multiple vantage points to adapt and learn. The existing distributed machine learning frameworks, however, cannot realize the full benefits of DML because they all rely on simple linear aggregation, and linear aggregation cannot handle the $\textit{divergence challenges}$ arising in military settings: the learning data at different devices can be heterogeneous ($\textit{i.e.}$, non-IID data), leading to model divergence, while the ability of devices to communicate is substantially limited ($\textit{i.e.}$, weak connectivity due to sparse and dynamic communications), reducing their ability to reconcile model divergence. In this paper, we introduce a novel DML framework called aggregation in the mirror space (AIMS), which allows a DML system to introduce a general mirror function that maps a model into a mirror space to conduct aggregation and gradient descent. By adapting the convexity of the mirror function according to the divergence force, AIMS allows automatic optimization of DML. We conduct both rigorous analysis and extensive experimental evaluations to demonstrate the benefits of AIMS. For example, we prove that AIMS achieves a loss of $O\left((\frac{m^{r+1}}{T})^{\frac1r}\right)$ after $T$ network-wide updates, where $m$ is the number of devices and $r$ the convexity of the mirror function, with existing linear aggregation frameworks being a special case with $r=2$. Our experimental evaluations using EMANE (Extendable Mobile Ad-hoc Network Emulator) for military communications settings show similar results: AIMS can improve the DML convergence rate by up to 57\% and scales well to more devices with weak connectivity, all with little additional computation overhead compared to traditional linear aggregation.
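To make the aggregation idea concrete, the following is a minimal sketch of mirror-space aggregation. It assumes the mirror map is the gradient of the $p$-norm potential $\psi(x) = \frac{1}{r}\sum_i |x_i|^r$, i.e., $\phi(x) = \operatorname{sign}(x)\,|x|^{r-1}$; the paper's actual mirror function and AIMS update rule may differ, and the function names here are illustrative, not from the paper.

```python
import numpy as np

def mirror_map(w, r):
    # Gradient of the potential psi(w) = (1/r) * sum(|w_i|^r);
    # maps model parameters into the mirror (dual) space.
    return np.sign(w) * np.abs(w) ** (r - 1)

def inverse_mirror_map(y, r):
    # Inverse of the mirror map: maps aggregated dual
    # coordinates back into the primal parameter space.
    return np.sign(y) * np.abs(y) ** (1.0 / (r - 1))

def aims_aggregate(models, r=2):
    # Aggregate device models by averaging in the mirror
    # space, then mapping the result back.
    mirrored = np.mean([mirror_map(w, r) for w in models], axis=0)
    return inverse_mirror_map(mirrored, r)
```

With $r=2$ the mirror map is the identity, so `aims_aggregate` reduces to the ordinary arithmetic mean of the device models, matching the claim that linear aggregation is the special case $r=2$; larger $r$ changes how much weight large-magnitude coordinates carry in the aggregate.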