In-Person Poster presentation / poster accept

Generative Modeling Helps Weak Supervision (and Vice Versa)

Benedikt Boecking · Nicholas Roberts · Willie Neiswanger · Stefano Ermon · Frederic Sala · Artur Dubrawski

MH1-2-3-4 #76

Keywords: [ General Machine Learning ] [ generative model ] [ weak supervision ]


Abstract:

Many promising applications of supervised machine learning face hurdles in the acquisition of labeled data in sufficient quantity and quality, creating an expensive bottleneck. To overcome such limitations, techniques that do not depend on ground-truth labels have been studied, including weak supervision and generative modeling. While these techniques would seem usable in concert, each improving the other, how to build an interface between them is not well understood. In this work, we propose a model fusing programmatic weak supervision and generative adversarial networks and provide theoretical justification motivating this fusion. The proposed approach captures discrete latent variables in the data alongside the weak-supervision-derived label estimate. Aligning the two allows for better modeling of the sample-dependent accuracies of the weak supervision sources, improving the estimate of the unobserved labels. It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels. Additionally, its learned latent variables can be inspected qualitatively. The model outperforms baseline weak supervision label models on a number of multiclass image classification datasets, improves the quality of generated images, and further improves end-model performance through data augmentation with synthetic samples.
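To make the described fusion concrete, below is a minimal, hypothetical sketch, not the authors' released implementation. The `LabelModel`, `Generator`, and `Discriminator` classes, the toy dimensions, and the InfoGAN-style code-prediction head are all assumptions standing in for the paper's actual architecture. The sketch shows one alignment step: a weak-supervision label model turns labeling-function votes into a pseudolabel posterior, and that pseudolabel supervises the discrete latent code of a conditional GAN.

```python
# Hypothetical sketch (not the authors' code) of coupling a weak-supervision
# label model with a GAN whose discrete latent code tracks the pseudolabel.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, N_LFS, Z_DIM, X_DIM = 3, 5, 16, 32  # toy sizes, chosen arbitrarily

class LabelModel(nn.Module):
    """Toy label model: learned per-labeling-function weights turn votes
    into a posterior over the unobserved class label."""
    def __init__(self):
        super().__init__()
        self.lf_weight = nn.Parameter(torch.zeros(N_LFS))  # stand-in for LF accuracies

    def forward(self, votes):  # votes: (batch, N_LFS) in {-1, 0..K-1}, -1 = abstain
        onehot = F.one_hot(votes.clamp(min=0), N_CLASSES).float()
        onehot = onehot * (votes >= 0).unsqueeze(-1)        # zero out abstentions
        return (onehot * self.lf_weight.softmax(0).view(1, -1, 1)).sum(1)

class Generator(nn.Module):
    """Generator conditioned on continuous noise z and a discrete code c."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM + N_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, X_DIM))

    def forward(self, z, c_onehot):
        return self.net(torch.cat([z, c_onehot], dim=1))

class Discriminator(nn.Module):
    """Discriminator with an extra head that recovers the discrete code
    from a sample (InfoGAN-style, used here as the alignment mechanism)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(X_DIM, 64), nn.ReLU())
        self.adv_head = nn.Linear(64, 1)           # real/fake logit (unused below)
        self.code_head = nn.Linear(64, N_CLASSES)  # discrete-code logits

    def forward(self, x):
        h = self.body(x)
        return self.adv_head(h), self.code_head(h)

# One illustrative alignment step: the label model's pseudolabel chooses the
# discrete code fed to the generator, and the discriminator's code head is
# trained to recover it, tying the GAN's latent classes to the weak labels.
label_model, gen, disc = LabelModel(), Generator(), Discriminator()
votes = torch.randint(-1, N_CLASSES, (8, N_LFS))    # toy labeling-function votes
pseudo = F.softmax(label_model(votes), dim=1)       # pseudolabel posterior
c = F.one_hot(pseudo.argmax(1), N_CLASSES).float()  # discrete code from pseudolabel
x_fake = gen(torch.randn(8, Z_DIM), c)
_, code_logits = disc(x_fake)
align_loss = F.cross_entropy(code_logits, pseudo.argmax(1))
align_loss.backward()  # gradients reach gen and disc; argmax detaches label_model
```

This sketch isolates only the code-alignment term; the adversarial real/fake losses and the joint training of the label model are omitted. Routing the pseudolabel through the GAN's discrete code is the coupling the abstract describes: it lets the generative model share structure with the label estimate and makes weakly supervised synthetic samples with pseudolabels possible.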
