TeleSparse: Practical Privacy-Preserving Verification of Deep Neural Networks
Authors: Mohammad M Maheri (Imperial College London), Hamed Haddadi (Imperial College London), Alex Davidson (LASIGE, Universidade de Lisboa)
Volume: 2025
Issue: 4
Pages: 861–880
DOI: https://doi.org/10.56553/popets-2025-0161
Abstract: Verification of the integrity of deep learning inference is crucial for understanding whether a model is being applied correctly. However, such verification typically requires access to model weights and (potentially sensitive or private) training data. So-called Zero-knowledge Succinct Non-Interactive Arguments of Knowledge (ZK-SNARKs) would appear to provide the capability to verify model inference without access to such sensitive data. However, applying ZK-SNARKs to modern neural networks, such as transformers and large vision models, introduces significant computational overhead. We present TeleSparse, a ZK-friendly post-processing mechanism that produces practical solutions to this problem. TeleSparse tackles two fundamental challenges inherent in applying ZK-SNARKs to modern neural networks: (1) Reducing circuit constraints: Over-parameterized models result in numerous constraints for ZK-SNARK verification, driving up memory and proof generation costs. We address this by applying sparsification to neural network models, enhancing proof efficiency without compromising accuracy or security. (2) Minimizing the size of lookup tables required for non-linear functions, by optimizing activation ranges through neural teleportation, a novel adaptation for narrowing the range of activation functions. TeleSparse reduces prover memory usage by 67% and proof generation time by 46% on the same model, with an accuracy trade-off of approximately 1%. We implement our framework using the Halo2 proving system and demonstrate its effectiveness across multiple architectures (Vision Transformer, ResNet, MobileNet) and datasets (ImageNet, CIFAR-10, CIFAR-100). This work opens new directions for ZK-friendly model design, moving toward scalable, resource-efficient verifiable deep learning.
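The neural-teleportation technique in point (2) exploits a well-known symmetry of ReLU networks: scaling one layer's weights by a positive constant and compensating in the next layer leaves the network function unchanged while shrinking the hidden pre-activation range, which in turn allows a smaller lookup table to cover the non-linearity inside a ZK circuit. A minimal NumPy sketch of this symmetry (the toy layer shapes and scale factor are hypothetical illustrations, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network (hypothetical shapes; illustration only).
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

x = rng.standard_normal(4)

# Positive-scaling ("teleportation-style") symmetry: scale layer 1 by c
# and compensate in layer 2 by 1/c. Since ReLU(c*z) = c*ReLU(z) for
# c > 0, the end-to-end network function is unchanged.
c = 0.25
out_orig = forward(x, W1, b1, W2, b2)
out_tele = forward(x, c * W1, c * b1, W2 / c, b2)
assert np.allclose(out_orig, out_tele)

# The hidden pre-activation range shrinks by exactly the factor c,
# which is what permits a smaller lookup table for the activation.
z_orig = W1 @ x + b1
z_tele = c * W1 @ x + c * b1
assert np.isclose(np.abs(z_tele).max(), c * np.abs(z_orig).max())
```

TeleSparse searches over such function-preserving reparameterizations to narrow activation ranges; this sketch only demonstrates the underlying invariance they rely on.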
Keywords: Deep learning, verifiable machine learning, zero-knowledge proofs, verifiable neural network inference, model sparsification ZKP, neural network symmetries, privacy-preserving computation
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.
