Paper Title
Characterizing and Taming Model Instability Across Edge Devices
Paper Authors
Paper Abstract
The same machine learning model running on different edge devices may produce highly divergent outputs on nearly identical inputs. Possible reasons for the divergence include differences in the device sensors, the device's signal processing hardware and software, and its operating system and processors. This paper presents the first methodical characterization of the variations in model prediction across real-world mobile devices. We demonstrate that accuracy is not a useful metric to characterize prediction divergence, and introduce a new metric, instability, which captures this variation. We characterize different sources of instability, and show that differences in compression formats and image signal processing account for significant instability in object classification models. Notably, in our experiments, 14-17% of images produced divergent classifications across one or more phone models. We evaluate three different techniques for reducing instability. In particular, we adapt prior work on making models robust to noise in order to fine-tune models to be robust to variations across edge devices. We demonstrate that our fine-tuning techniques reduce instability by 75%.
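The abstract does not spell out how instability is computed. A minimal sketch of one plausible formulation, assuming we have each device's predicted label for every input: the fraction of inputs on which at least two devices disagree. The device names and labels below are purely illustrative.

```python
from typing import Dict, Hashable, List


def instability(predictions: Dict[str, List[Hashable]]) -> float:
    """Fraction of inputs whose predicted label differs across devices.

    `predictions` maps a device name to its list of predicted labels,
    one label per input, with inputs in the same order for every device.
    """
    per_device = list(predictions.values())
    n_inputs = len(per_device[0])
    assert all(len(p) == n_inputs for p in per_device), "devices must label the same inputs"
    # An input is "unstable" if the set of labels across devices has size > 1.
    divergent = sum(1 for labels in zip(*per_device) if len(set(labels)) > 1)
    return divergent / n_inputs


# Illustrative example: three hypothetical phones classify four images;
# only the second image receives conflicting labels.
preds = {
    "phone_a": ["cat", "dog", "car", "bird"],
    "phone_b": ["cat", "dog", "car", "bird"],
    "phone_c": ["cat", "wolf", "car", "bird"],
}
print(instability(preds))  # 1 of 4 images diverges -> 0.25
```

Under this formulation, the 14-17% figure from the abstract would correspond to an instability of 0.14-0.17 over the evaluated image set.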