Poster

Sample Efficient Adaptive Text-to-Speech

Yutian Chen · Yannis M Assael · Brendan Shillingford · David Budden · Scott Reed · Heiga Zen · Quan Wang · Luis C. Cobo · Andrew Trask · Ben Laurie · Caglar Gulcehre · Aaron van den Oord · Oriol Vinyals · Nando de Freitas

Great Hall BC #42

Keywords: [ meta learning ] [ few shot ] [ text to speech ] [ wavenet ]


Abstract:

We present a meta-learning approach for adaptive text-to-speech (TTS) with little data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system. Instead, the aim is to produce a network that requires little data at deployment time to rapidly adapt to new speakers. We introduce and benchmark three strategies: (i) learning the speaker embedding while keeping the WaveNet core fixed, (ii) fine-tuning the entire architecture with stochastic gradient descent, and (iii) predicting the speaker embedding with a trained neural network encoder. The experiments show that these approaches successfully adapt the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers.
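The three adaptation strategies can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' implementation: a small conditional network stands in for the WaveNet core, and all module names, dimensions, data, and optimizer settings are assumptions chosen for brevity.

```python
# Minimal sketch of the three few-shot adaptation strategies.
# A toy conditional model stands in for the conditional WaveNet core;
# everything here (sizes, data, hyperparameters) is illustrative.

import torch
import torch.nn as nn

EMB_DIM, IN_DIM, OUT_DIM = 16, 8, 1

class ConditionalCore(nn.Module):
    """Stand-in for the shared conditional WaveNet core: maps an input
    frame plus a speaker embedding to an output prediction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IN_DIM + EMB_DIM, 64), nn.ReLU(), nn.Linear(64, OUT_DIM)
        )

    def forward(self, x, speaker_emb):
        cond = speaker_emb.expand(x.size(0), -1)          # broadcast embedding
        return self.net(torch.cat([x, cond], dim=-1))

core = ConditionalCore()                 # assume: pretrained multi-speaker core
x = torch.randn(32, IN_DIM)              # assume: few minutes of adaptation data
y = torch.randn(32, OUT_DIM)
loss_fn = nn.MSELoss()

# (i) Embedding-only adaptation: optimize a fresh speaker embedding;
# the core weights receive no updates, so the core stays fixed.
emb = nn.Parameter(torch.zeros(1, EMB_DIM))
opt = torch.optim.Adam([emb], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss_fn(core(x, emb), y).backward()
    opt.step()

# (ii) Whole-model fine-tuning: update the embedding and every core
# weight with stochastic gradient descent on the same few-shot data.
emb_all = nn.Parameter(emb.detach().clone())
opt = torch.optim.SGD(list(core.parameters()) + [emb_all], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss_fn(core(x, emb_all), y).backward()
    opt.step()

# (iii) Encoder prediction: a separately trained encoder maps the new
# speaker's audio to an embedding in one shot, with no per-speaker
# gradient steps at deployment time (encoder here is an assumption).
encoder = nn.Linear(IN_DIM, EMB_DIM)     # assume: trained alongside the core
emb_pred = encoder(x).mean(dim=0, keepdim=True)
out = core(x, emb_pred)
```

Strategy (i) is the cheapest, since only an embedding vector is optimized; strategy (ii) has the most capacity but risks overfitting on minutes of audio; strategy (iii) requires no gradient steps at deployment, trading adaptation cost for a fixed encoder.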