Securely and Efficiently Outsourcing Neural Network Inference via Parallel MSB Extraction

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Abstract
Outsourcing neural network (NN) inference services to the cloud gives rise to considerable privacy concerns about the model provider's proprietary model and the user's private data. Current cryptography-based secure NN inference schemes are ill-suited to high-latency networks because of the substantial communication overhead they incur when computing the non-linear components of neural networks. In this paper, we present ParaNN, a secure cloud-based outsourced computation framework that supports lightweight secure neural network inference. At the core of ParaNN, we design a secure and parallel method for extracting the most significant bit (MSB) based on a parallel prefix adder. This forms the cornerstone of a series of secure and communication-efficient computation protocols specifically tailored to non-linear layers such as ReLU and Maxpool. Our experiments show that ParaNN achieves a 6.7×-27.4× improvement in online inference time over wide area networks (WAN) compared with state-of-the-art works.
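The protocol itself is only described in the full paper; as a rough illustration of the underlying idea, the plaintext sketch below shows how the MSB of a value held as two additive shares x0, x1 over Z_{2^ELL} can be recovered from a Kogge-Stone parallel prefix carry computation, the kind of adder structure the abstract refers to. The bit width ELL and the function names (msb_via_prefix_adder, relu_from_msb) are illustrative assumptions, and everything here runs in the clear rather than on secret shares.

```python
# Illustrative plaintext sketch (not ParaNN's actual protocol): extract the MSB of
# a secret-shared value x = (x0 + x1) mod 2^ELL with a Kogge-Stone parallel prefix
# carry computation. In a secure realization, the generate/propagate logic would
# be evaluated jointly on shares; here it is in the clear to show the structure.
import random

ELL = 32  # assumed bit width of the sharing ring Z_{2^ELL}


def msb_via_prefix_adder(x0: int, x1: int) -> int:
    """Return MSB((x0 + x1) mod 2^ELL) using generate/propagate prefix logic."""
    a = [(x0 >> i) & 1 for i in range(ELL)]
    b = [(x1 >> i) & 1 for i in range(ELL)]
    # Bit-level generate and propagate signals.
    g = [ai & bi for ai, bi in zip(a, b)]
    p = [ai ^ bi for ai, bi in zip(a, b)]
    # Kogge-Stone prefix combination: after log2(ELL) levels, G[i] equals the
    # carry out of bit position i (with zero carry-in).
    G, P = g[:], p[:]
    dist = 1
    while dist < ELL:
        newG, newP = G[:], P[:]
        for i in range(dist, ELL):
            newG[i] = G[i] | (P[i] & G[i - dist])
            newP[i] = P[i] & P[i - dist]
        G, P = newG, newP
        dist *= 2
    carry_into_msb = G[ELL - 2]  # carry produced by the low ELL-1 bit positions
    return a[ELL - 1] ^ b[ELL - 1] ^ carry_into_msb


def relu_from_msb(x0: int, x1: int) -> int:
    """Plaintext ReLU check: the MSB is the sign bit of x in Z_{2^ELL}."""
    x = (x0 + x1) % (1 << ELL)
    return 0 if msb_via_prefix_adder(x0, x1) == 1 else x


if __name__ == "__main__":
    for _ in range(1000):
        x = random.randrange(-(1 << (ELL - 1)), 1 << (ELL - 1))
        x0 = random.randrange(1 << ELL)
        x1 = (x - x0) % (1 << ELL)
        assert msb_via_prefix_adder(x0, x1) == ((x % (1 << ELL)) >> (ELL - 1)) & 1
```

The prefix structure is what makes the comparison parallelizable: only log2(ELL) combination levels are needed, and in a secret-sharing setting each level could be evaluated on all bit positions at once, which is presumably the source of the round savings the abstract reports.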
Keywords
Neural network inference, secure computation, secret sharing