

Poster

Adversarial Audio Synthesis

Chris Donahue · Julian McAuley · Miller Puckette

Great Hall BC #26

Keywords: [ gan ] [ adversarial ] [ audio ] [ waveform ] [ spectrogram ] [ wavegan ] [ specgan ]


Abstract:

Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. WaveGAN is capable of synthesizing one-second slices of audio waveforms with global coherence, suitable for sound effect generation. Our experiments demonstrate that—without labels—WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. We compare WaveGAN to a method that applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising.
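The core idea the abstract describes—adapting DCGAN-style image generators to raw audio by swapping 2-D upsampling for 1-D upsampling along the time axis—can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer counts, kernel length, channel widths, and the nearest-neighbor-upsample-plus-convolution stand-in for a strided transposed convolution are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_conv1d(x, w, stride=4, act=True):
    """Nearest-neighbor upsample by `stride`, then a 1-D convolution.
    A simplified stand-in for a strided transposed convolution."""
    x_up = np.repeat(x, stride, axis=0)                 # (len*stride, in_ch)
    k, in_ch, out_ch = w.shape
    pad = k // 2
    x_pad = np.pad(x_up, ((pad, pad), (0, 0)))          # "same" padding
    out = np.empty((x_up.shape[0], out_ch))
    for t in range(x_up.shape[0]):
        out[t] = np.tensordot(x_pad[t:t + k], w,
                              axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0) if act else out         # ReLU on hidden layers

def generator(z, n_layers=4):
    """Map a latent vector z to a mono waveform by repeated 1-D upsampling,
    analogous to a DCGAN generator operating along the time axis."""
    # Project z to a short initial feature map (16 timesteps, 8 channels).
    x = np.tanh(z @ rng.standard_normal((z.size, 16 * 8))).reshape(16, 8)
    ch = 8
    for i in range(n_layers):
        last = (i == n_layers - 1)
        out_ch = 1 if last else max(ch // 2, 1)
        w = rng.standard_normal((25, ch, out_ch)) * 0.05  # kernel length 25
        x = upsample_conv1d(x, w, act=not last)
        ch = out_ch
    return np.tanh(x[:, 0])                             # samples in [-1, 1]

z = rng.standard_normal(100)
wave = generator(z)
print(wave.shape)   # → (4096,): 16 timesteps upsampled 4x per layer
```

With four layers of 4x upsampling, the 16-step initial feature map grows to 16 × 4⁴ = 4096 samples; a real model would use more layers (and learned weights) to reach one second of 16 kHz audio (16384 samples), with a discriminator trained adversarially against this generator.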
