$d_{\mathcal{X}}$-Private Mechanisms for Linear Queries.

arXiv: Machine Learning (2018)

Abstract
Differential privacy is one of the strongest privacy guarantees, allowing the release of useful information about any sensitive dataset. However, it provides the same level of protection for all elements in the data universe. In this paper, we consider $d_{\mathcal{X}}$-privacy, an instantiation of the privacy notion introduced in \cite{chatzikokolakis2013broadening}, which allows specifying a separate privacy budget for each pair of elements in the data universe. We describe a systematic procedure to tailor any existing differentially private mechanism into a $d_{\mathcal{X}}$-private variant for the case of linear queries. For the resulting $d_{\mathcal{X}}$-private mechanisms, we provide theoretical guarantees on the trade-off between utility and privacy, and show that they always outperform their \emph{vanilla} counterparts. We demonstrate the effectiveness of our procedure by evaluating the proposed $d_{\mathcal{X}}$-private Laplace mechanism on both synthetic and real datasets, using a set of randomly generated linear queries.
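As background for the abstract, the following is a minimal sketch of the standard (\emph{vanilla}) Laplace mechanism for a linear query, which the paper takes as the starting point for its $d_{\mathcal{X}}$-private variant. All function and parameter names here are illustrative, not from the paper; the $d_{\mathcal{X}}$-private version would instead calibrate the noise to the per-pair privacy budgets induced by the metric $d_{\mathcal{X}}$.

```python
import numpy as np

def laplace_mechanism(data, query_weights, epsilon, sensitivity):
    """Vanilla epsilon-differentially-private answer to the linear query
    w . data, for illustration only (names are hypothetical).

    data          : 1-D array, one numeric value per record
    query_weights : 1-D array w defining the linear query
    epsilon       : privacy budget (smaller = more private)
    sensitivity   : L1 sensitivity of the query (max change in the
                    answer when one record changes)
    """
    true_answer = float(np.dot(query_weights, data))
    # Adding Laplace noise with scale sensitivity / epsilon yields
    # epsilon-differential privacy for this query.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_answer + noise
```

A larger `epsilon` (or smaller sensitivity) shrinks the noise scale, trading privacy for utility; the paper's theoretical results quantify how the $d_{\mathcal{X}}$-private variant improves this trade-off.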