ICLR 2018


Workshop

Censoring Representations with Multiple-Adversaries over Random Subspaces

Yusuke Iwasawa · Kotaro Nakayama · Yutaka Matsuo

East Meeting Level 8 + 15 #17

Adversarial feature learning (AFL) has been successfully applied to censor the representations of neural networks; for example, AFL can learn anonymized representations that mitigate privacy issues by constraining the representations with adversarial gradients from external discriminators that try to discern and extract sensitive information from the activations. In this paper, we propose an ensemble approach to the design of the discriminator, based on the intuition that the discriminator needs to be robust for AFL to succeed. Empirical validation on three user-anonymization tasks shows that the proposed method achieves state-of-the-art performance on all three datasets without significantly harming the utility of the data. We also provide initial theoretical results on the generalization error of the adversarial gradients, which suggest that the accuracy of the discriminator is not the decisive factor in its design.
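The core idea of multiple adversaries over random subspaces can be sketched as follows: each discriminator in the ensemble observes only a random subset of the representation's dimensions, and the adversarial gradient passed back to the encoder is the ensemble average. This is a minimal illustrative sketch, not the paper's implementation; the toy linear discriminators, subspace size, and averaging scheme are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16    # dimensionality of the learned representation (assumed)
K = 3     # number of adversarial discriminators in the ensemble
SUB = 8   # size of each random subspace (assumed)

# Each discriminator only sees a random subspace of the representation.
subspaces = [rng.choice(D, size=SUB, replace=False) for _ in range(K)]
# Toy linear (logistic) discriminators standing in for real networks.
weights = [rng.normal(size=SUB) for _ in range(K)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_gradient(z, label):
    """Ensemble-averaged gradient of each discriminator's logistic loss
    w.r.t. the representation z; each discriminator contributes only on
    its own subspace (zeros elsewhere)."""
    grad = np.zeros_like(z)
    for idx, w in zip(subspaces, weights):
        p = sigmoid(z[idx] @ w)            # discriminator's prediction
        grad[idx] += (p - label) * w / K   # chain rule through z[idx] @ w
    return grad

z = rng.normal(size=D)   # a representation, e.g. network activations
g = adversarial_gradient(z, label=1.0)
```

In AFL, the encoder would follow the negative of this averaged gradient (scaled by a trade-off weight) so that no single discriminator's weaknesses dominate the censoring signal.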
