Poster

Neural TTS Stylization with Adversarial and Collaborative Games

Shuang Ma · Daniel McDuff · Yale Song

Great Hall BC #76

Keywords: [ text-to-speech synthesis ] [ GANs ]


Abstract:

The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud. The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud. Unfortunately, modeling style in TTS is somewhat under-determined, and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real and generated samples in both the original space and the latent space. As a result, the proposed model delivers a highly controllable generator and a disentangled representation. Benefiting from the separate modeling of style and content, our model can generate high-fidelity speech that satisfies the desired style conditions. Our model achieves state-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker's voice).
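The combined objective described above can be sketched numerically. This is a minimal illustrative example, not the paper's implementation: all tensors are random stand-ins for network outputs, the loss weights are invented, and the adversarial term uses a generic non-saturating GAN loss on dummy discriminator logits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for network outputs (illustrative, not the paper's model):
x_real = rng.normal(size=(4, 16))                  # real audio features x_aud
x_gen = x_real + 0.1 * rng.normal(size=(4, 16))    # generated/reconstructed audio
z_real = rng.normal(size=(4, 8))                   # latent code of the real sample
z_gen = z_real + 0.1 * rng.normal(size=(4, 8))     # latent code of the generated sample

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

# Reconstruction loss: the decoder output should reconstruct x_aud.
loss_recon = mse(x_gen, x_real)

# Collaborative game: match real and generated samples in both the
# original (audio) space and the latent space.
loss_collab = mse(x_gen, x_real) + mse(z_gen, z_real)

# Adversarial game: generator tries to make discriminator logits large;
# here random logits stand in for a discriminator network.
d_logits_gen = rng.normal(size=(4, 1))
loss_adv = float(np.mean(np.logaddexp(0.0, -d_logits_gen)))  # softplus(-logit)

# Total objective with hypothetical weights lambda_adv, lambda_collab.
lam_adv, lam_collab = 1.0, 10.0
loss_total = loss_recon + lam_adv * loss_adv + lam_collab * loss_collab
print(loss_total > 0.0)
```

In the actual model these terms would be computed from the encoder-decoder and discriminator networks and backpropagated jointly; the sketch only shows how the three losses compose into one training objective.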
