Information Flow Control in Machine Learning through Modular Model Architecture
USENIX Security Symposium (2023)
Abstract
In today's machine learning (ML) models, any part of the training data can
affect the model output. This lack of control for information flow from
training data to model output is a major obstacle in training models on
sensitive data when access control only allows individual users to access a
subset of data. To enable secure machine learning for access-controlled data,
we propose the notion of information flow control (IFC) for machine learning, and
develop an extension to the Transformer language model architecture that
strictly adheres to the IFC definition we propose. Our architecture controls
information flow by limiting the influence of training data from each security
domain to a single expert module, and only enables a subset of experts at
inference time based on the access control policy. The evaluation using large
text and code datasets shows that our proposed parametric IFC architecture has
minimal (1.9%) overhead while significantly improving model accuracy (by 38%
on these datasets) by enabling training on access-controlled data.
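The core mechanism described above, routing each security domain's training data into its own expert module and enabling only policy-permitted experts at inference, can be sketched in plain Python. This is an illustrative toy (the class names, the additive "expert" computation, and the averaging aggregation are all assumptions for exposition), not the paper's actual Transformer implementation.

```python
class DomainExpert:
    """One expert trained only on data from a single security domain,
    so information from that domain can flow only through this module."""

    def __init__(self, domain, bias):
        self.domain = domain
        self.bias = bias  # stand-in for the expert's learned parameters

    def forward(self, x):
        return x + self.bias


class ModularModel:
    """Toy modular model: holds one expert per security domain and, at
    inference, activates only the experts the caller is allowed to access."""

    def __init__(self, experts):
        self.experts = {e.domain: e for e in experts}

    def forward(self, x, allowed_domains):
        # Enable only experts permitted by the access control policy,
        # then average their outputs (a simple aggregation stand-in).
        active = [self.experts[d] for d in allowed_domains if d in self.experts]
        if not active:
            return x  # no accessible experts: fall back to the shared path
        outputs = [e.forward(x) for e in active]
        return sum(outputs) / len(outputs)


model = ModularModel([DomainExpert("hr", 1.0), DomainExpert("finance", 2.0)])
print(model.forward(10.0, allowed_domains={"hr"}))             # -> 11.0
print(model.forward(10.0, allowed_domains={"hr", "finance"}))  # -> 11.5
```

Because the "finance" expert is simply never evaluated for a user restricted to "hr", its parameters (and thus any information from finance training data) cannot influence that user's output, which is the property the architecture enforces.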