

Poster in Affinity Workshop: Tiny Papers Poster Session 5

DSF-GAN: Downstream Feedback Generative Adversarial Network

Oriel Perets · Nadav Rappoport

Halle B #300
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Utility and privacy are two crucial measures of synthetic tabular data. While privacy has been dramatically improved by the use of Generative Adversarial Networks (GANs), generating high-utility synthetic samples remains challenging. To increase the utility of the samples, we propose a novel architecture called DownStream Feedback Generative Adversarial Network (DSF-GAN). This approach uses feedback from a downstream prediction model mid-training to add valuable information to the generator's loss function. Hence, DSF-GAN harnesses a downstream prediction task to increase the utility of the synthetic samples. To evaluate our method, we tested it on two popular datasets. Our experiments show better model performance when training on DSF-GAN-generated synthetic samples than on synthetic data generated by the same GAN architecture without feedback, evaluated on the same validation set of real samples. All code and datasets used in this research are openly available for ease of reproduction.
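As a rough illustration of the idea described in the abstract, the sketch below shows one plausible way a downstream prediction loss could be folded into a generator update. This is not the authors' implementation: the names (`generator`, `discriminator`, `downstream_model`, `feedback_weight`) and the assumption that the synthetic row's last column is the prediction target are hypothetical choices made only for the example.

```python
# Hedged sketch (PyTorch), assuming a binary downstream prediction task and
# that the generator emits full rows whose last column is the target value.
import torch
import torch.nn.functional as F


def generator_step(generator, discriminator, downstream_model,
                   noise, feedback_weight=1.0):
    """One generator update combining the adversarial loss with
    feedback from a downstream prediction model."""
    fake = generator(noise)                      # synthetic rows: features + target
    fake_x, fake_y = fake[:, :-1], fake[:, -1]   # assume last column is the target

    # Standard non-saturating adversarial term: try to fool the discriminator.
    adv_logits = discriminator(fake)
    adv_loss = F.binary_cross_entropy_with_logits(
        adv_logits, torch.ones_like(adv_logits))

    # Downstream feedback term: the prediction model's loss on the synthetic
    # samples is back-propagated into the generator, pushing it toward samples
    # that are useful for the downstream task.
    preds = downstream_model(fake_x).squeeze(-1)
    feedback_loss = F.binary_cross_entropy_with_logits(preds, fake_y)

    return adv_loss + feedback_weight * feedback_loss
```

In this sketch the feedback term is simply added to the adversarial loss with a weighting factor; the actual form of the feedback signal in DSF-GAN is described in the paper and accompanying code, not here.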
