Paper Title
Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
Paper Authors
Paper Abstract
As ML models have increased in capabilities and accuracy, so has the complexity of their deployments. Increasingly, ML model consumers are turning to service providers to serve the ML models in the ML-as-a-service (MLaaS) paradigm. As MLaaS proliferates, a critical requirement emerges: how can model consumers verify that the correct predictions were served, in the face of malicious, lazy, or buggy service providers? In this work, we present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done. To do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge), a form of zero-knowledge proofs. ZK-SNARKs allow us to verify ML model execution non-interactively and with only standard cryptographic hardness assumptions. In particular, we provide the first ZK-SNARK proof of valid inference for a full-resolution ImageNet model, achieving 79% top-5 accuracy. We further use these ZK-SNARKs to design protocols to verify ML model execution in a variety of scenarios, including for verifying MLaaS predictions, verifying MLaaS model accuracy, and using ML models for trustless retrieval. Together, our results show that ZK-SNARKs have the promise to make verified ML model inference practical.
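To make the MLaaS verification scenario in the abstract concrete, the sketch below illustrates the interaction pattern only: the provider publishes a binding commitment to its (secret) model weights, serves a prediction together with a succinct proof that the prediction came from the committed model on the client's input, and the client checks the proof instead of trusting the provider. All names here are hypothetical and the ZK-SNARK prove/verify calls are placeholders, not the paper's implementation; a real deployment would replace them with an actual proof system over an arithmetized DNN circuit.

# Minimal sketch of the verify-a-prediction flow (hypothetical names, stubbed proof system).
import hashlib
from dataclasses import dataclass

def commit(weights: bytes) -> str:
    """Binding commitment to the model weights (here simply a hash)."""
    return hashlib.sha256(weights).hexdigest()

@dataclass
class Proof:
    # Placeholder for a succinct proof that
    #   prediction == Model(weights)(x)  and  commit(weights) == commitment.
    blob: bytes

def snark_prove(weights: bytes, x: bytes, prediction: int) -> Proof:
    # Stub: a real prover evaluates the DNN inside an arithmetic circuit.
    return Proof(blob=hashlib.sha256(weights + x + bytes([prediction])).digest())

def snark_verify(commitment: str, x: bytes, prediction: int, proof: Proof) -> bool:
    # Stub: a real verifier checks the succinct proof against only the public
    # inputs (commitment, x, prediction); it never sees the weights.
    return isinstance(proof, Proof) and len(proof.blob) == 32

# --- Provider (prover) side ---
weights = b"\x01\x02\x03"      # secret model parameters
commitment = commit(weights)   # published once, e.g. alongside the service

x = b"client input image"      # client's query
prediction = 7                 # provider runs the model and gets a class label
proof = snark_prove(weights, x, prediction)

# --- Client (verifier) side ---
assert snark_verify(commitment, x, prediction, proof), "reject the prediction"
print("prediction accepted:", prediction)

The same pattern extends to the other scenarios mentioned in the abstract: proving accuracy on a public test set amounts to proving many such inferences against the one committed model, and trustless retrieval amounts to proving that returned items satisfy a committed model's predicate.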