Paper Title
GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning
Paper Authors
Paper Abstract
Embedded systems demand on-device processing of data using Neural Networks (NNs) while conforming to memory, power, and computation constraints, leading to a tradeoff between efficiency and accuracy. To bring NNs to edge devices, several optimizations have been widely adopted, such as model compression through pruning and quantization, and off-the-shelf architectures with efficient designs. When these models are deployed in real-world, privacy-sensitive applications, they must resist inference attacks to protect the privacy of users' training data. However, resistance against inference attacks is not accounted for when designing NN models for IoT. In this work, we analyse the three-dimensional privacy-accuracy-efficiency tradeoff in NNs for IoT devices and propose the Gecko training methodology, in which resistance to inference attacks is explicitly added as a design objective. We treat the inference-time memory, computation, and power constraints of embedded devices as criteria for designing the NN architecture while also preserving privacy. We choose quantization as the design choice for highly efficient and private models. This choice is driven by the observation that compressed models leak more information than baseline models, while off-the-shelf efficient architectures exhibit a poor efficiency-privacy tradeoff. We show that models trained using the Gecko methodology are comparable to prior defences against black-box membership inference attacks in terms of accuracy and privacy, while additionally providing efficiency.
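As a minimal sketch of the kind of quantization the abstract refers to (not the Gecko training procedure itself), the snippet below applies PyTorch post-training dynamic quantization to a toy classifier; the layer sizes and the choice of dynamic int8 quantization are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch: int8 dynamic quantization of a hypothetical
    # small classifier, standing in for an embedded NN.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(64, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )
    model.eval()

    # Weights of Linear layers are stored as int8; activations are
    # quantized dynamically at inference time, reducing memory and compute.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 64)
    with torch.no_grad():
        print(quantized(x).shape)  # torch.Size([1, 10])

In the setting the abstract describes, such a quantized model would additionally be evaluated against black-box membership inference attacks (e.g., comparing attack success on member versus non-member samples), which this sketch does not include.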