Poster

Information Laundering for Model Privacy

Xinran Wang · Yu Xiang · Jun Gao · Jie Ding

Keywords: [ Model privacy ] [ Adversarial Attack ] [ Machine Learning ] [ Security ] [ Privacy-utility tradeoff ]

Abstract:

In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy, which concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning methods, and its deployment means that it will return a deterministic or random response for a given input query. An information-laundered model consists of probabilistic components that deliberately perturb the intended input and output for queries of the model, so that adversarial acquisition of the model is less likely. Under the proposed framework, we develop an information-theoretic principle to quantify the fundamental tradeoffs between model utility and privacy leakage, and we derive the optimal design.
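The mechanism described above can be illustrated with a minimal conceptual sketch. The code below is not the authors' implementation; it assumes simple additive-Gaussian input and output kernels (the `input_noise` and `output_noise` parameters are hypothetical) to show how a deployed model can be wrapped with probabilistic components that perturb queries and responses:

```python
import random


def laundered_model(model, query, input_noise=0.1, output_noise=0.1):
    """Wrap a deployed model with probabilistic input and output kernels.

    Illustrative sketch only: the input kernel randomly perturbs each
    coordinate of the query, and the output kernel randomly perturbs the
    scalar response, so that repeated queries reveal less about the
    underlying private model. Larger noise means stronger privacy but
    lower utility, reflecting the privacy-utility tradeoff.
    """
    # Input kernel: add Gaussian noise to each query coordinate.
    perturbed_query = [x + random.gauss(0.0, input_noise) for x in query]
    # Evaluate the private model on the perturbed query.
    response = model(perturbed_query)
    # Output kernel: add Gaussian noise to the returned response.
    return response + random.gauss(0.0, output_noise)
```

With both noise levels set to zero the wrapper reduces to the original model, making the utility end of the tradeoff explicit; increasing either noise level moves the deployment toward the privacy end.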
