Tunable Measures for Information Leakage and Applications to Privacy-Utility Tradeoffs

arXiv (Cornell University), 2019

Citations: 68 | Views: 87
Abstract
We introduce a tunable measure for information leakage called maximal $\alpha$-leakage. This measure quantifies the maximal gain of an adversary in inferring any (potentially random) function of a dataset from a release of the data. The inferential capability of the adversary is, in turn, quantified by a class of adversarial loss functions that we introduce as $\alpha$-loss, $\alpha\in[1,\infty]$. The choice of $\alpha$ determines the specific adversarial action, ranging from refining a belief (about any function of the data) for $\alpha=1$ to guessing the most likely value for $\alpha=\infty$, while refining the $\alpha^{th}$ moment of the belief for $\alpha$ in between. Maximal $\alpha$-leakage then quantifies the adversarial gain under $\alpha$-loss over all possible functions of the data. In particular, for the extremal values $\alpha=1$ and $\alpha=\infty$, maximal $\alpha$-leakage simplifies to mutual information and maximal leakage, respectively. For $\alpha\in(1,\infty)$, this measure is shown to be the Arimoto channel capacity of order $\alpha$. We show that maximal $\alpha$-leakage satisfies data processing inequalities and a sub-additivity property, thereby allowing for a weak composition result. Building upon these properties, we use maximal $\alpha$-leakage as the privacy measure and study the problem of data publishing with privacy guarantees, wherein the utility of the released data is ensured via a hard distortion constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity. We show that under a hard distortion constraint, for $\alpha>1$ the optimal mechanism is independent of $\alpha$, and therefore, the resulting optimal tradeoff is the same for all values of $\alpha>1$. Finally, the tunability of maximal $\alpha$-leakage as a privacy measure is also illustrated for binary data with average Hamming distortion as the utility measure.
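The abstract's description of $\alpha$-loss (log-loss at $\alpha=1$, probability of error at $\alpha=\infty$) can be illustrated with a short sketch. The closed form for intermediate $\alpha$, namely $\frac{\alpha}{\alpha-1}\bigl(1 - p^{(\alpha-1)/\alpha}\bigr)$ where $p$ is the belief assigned to the true value, is assumed from the paper; the function name is hypothetical.

```python
import math

def alpha_loss(p_correct, alpha):
    """Sketch of alpha-loss for a belief assigning probability
    p_correct to the true value (form assumed from the paper):
      alpha = 1  : log-loss, -log(p)
      1 < a < oo : (a/(a-1)) * (1 - p**((a-1)/a))
      alpha = oo : probability of error, 1 - p
    """
    if alpha == 1:
        return -math.log(p_correct)
    if math.isinf(alpha):
        return 1.0 - p_correct
    return (alpha / (alpha - 1.0)) * (1.0 - p_correct ** ((alpha - 1.0) / alpha))
```

As a sanity check, the intermediate expression recovers the two extremes: letting $\alpha\to 1$ gives $-\log p$ (since $1 - p^{\epsilon} \approx -\epsilon \log p$ for small $\epsilon = (\alpha-1)/\alpha$), and letting $\alpha\to\infty$ gives $1 - p$.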
Keywords
Mutual information, maximal leakage, maximal alpha-leakage, Sibson mutual information, Arimoto mutual information, f-divergence, privacy-utility tradeoff, hard distortion