FOBNN: Fast Oblivious Binarized Neural Network Inference
arXiv (2024)
Abstract
The superior performance of deep learning has propelled the rise of Deep
Learning as a Service, enabling users to transmit their private data to service
providers for model execution and inference retrieval. Nevertheless, the
primary concern remains safeguarding the confidentiality of sensitive user data
while optimizing the efficiency of secure protocols. To address this, we
develop a fast oblivious binarized neural network inference framework, FOBNN.
Specifically, we customize binarized convolutional neural networks to enhance
oblivious inference, design two fast algorithms for binarized convolutions, and
optimize network structures experimentally under constrained costs. Initially,
we meticulously analyze the range of intermediate values in binarized
convolutions to minimize bit representation, resulting in the Bit Length
Bounding (BLB) algorithm. Subsequently, leveraging the efficiency of bitwise
operations in BLB, we further enhance performance by employing pure bitwise
operations for each binary digit position, yielding the Layer-wise Bit
Accumulation (LBA) algorithm. Theoretical analysis validates FOBNN's security
and indicates up to a 2× improvement in computational and communication
costs over the state-of-the-art method. We demonstrate our framework's
effectiveness in RNA function prediction within bioinformatics. Rigorous
experimental assessments confirm that our oblivious inference solutions not
only maintain but often exceed the original accuracy, surpassing prior efforts.
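The efficiency of BLB and LBA rests on the fact that a binarized convolution reduces to bitwise operations. As background, the following is a minimal plaintext sketch of the standard XNOR-popcount trick for a binarized dot product; it is purely illustrative and is not the paper's oblivious protocol (all names are hypothetical).

```python
def binarized_dot(x_bits, w_bits):
    """Dot product of two {-1, +1} vectors encoded as 0/1 bits.

    With the encoding -1 -> 0 and +1 -> 1, each product term is +1
    exactly when the bits match (XNOR), so over n bits:
        dot = 2 * popcount(XNOR(x, w)) - n
    Note the result lies in [-n, n], so intermediate accumulators
    need only a few bits -- the kind of range bound that a
    bit-length-minimizing scheme like BLB exploits.
    """
    n = len(x_bits)
    matches = sum(1 for xb, wb in zip(x_bits, w_bits) if xb == wb)
    return 2 * matches - n

# Example: x = [+1, -1, +1], w = [+1, +1, -1]
# encoded:  x_bits = [1, 0, 1], w_bits = [1, 1, 0]
# direct product sum: (+1) + (-1) + (-1) = -1
print(binarized_dot([1, 0, 1], [1, 1, 0]))  # -1
```

In an oblivious setting the same XNOR-and-count structure is evaluated under a secure-computation protocol rather than in the clear, which is why shaving bits off the accumulator translates directly into lower protocol cost.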