
In-Person Poster presentation / poster accept

NANSY++: Unified Voice Synthesis with Neural Analysis and Synthesis

Hyeong-Seok Choi · Jinhyeok Yang · Juheon Lee · Hyeongju Kim

MH1-2-3-4 #57

Keywords: [ singing voice synthesis ] [ zero-shot voice conversion ] [ voice synthesis ] [ text-to-speech ] [ integrated framework ] [ voice designing ] [ Applications ]


Various applications of voice synthesis have been developed independently, even though they all produce “voice” as output. Moreover, most voice synthesis models still require large amounts of audio data paired with annotations (e.g., text transcriptions and music scores) for training. To address these issues, we propose a unified framework for synthesizing and manipulating voice signals from analysis features, dubbed NANSY++. The backbone network of NANSY++ is trained in a self-supervised manner that requires no annotations paired with the audio. After training the backbone network, we efficiently tackle four voice applications, i.e., voice conversion, text-to-speech, singing voice synthesis, and voice designing, by partially modeling the analysis features required for each task. Extensive experiments show that the proposed framework offers competitive advantages such as controllability, data efficiency, and fast training convergence, while providing high-quality synthesis. Audio samples:
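To make the analysis-and-synthesis idea concrete, below is a minimal sketch of the interface the abstract describes: an analyzer that decomposes raw, unlabeled audio into analysis features, a synthesizer that reconstructs the waveform from them (the self-supervised backbone objective), and a downstream task obtained by re-modeling only part of the features. The module names (`Analyzer`, `Synthesizer`), the particular three-way feature split (pitch, linguistic content, timbre), and all layer choices and shapes are illustrative assumptions, not the actual NANSY++ architecture.

```python
# Illustrative sketch only; not the NANSY++ architecture.
import torch
import torch.nn as nn

class Analyzer(nn.Module):
    """Decomposes a waveform into analysis features
    (hypothetical split: pitch, linguistic content, global timbre)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Conv1d(1, hidden, kernel_size=320, stride=160)
        self.pitch_head = nn.Conv1d(hidden, 1, 1)        # frame-level F0-like track
        self.linguistic_head = nn.Conv1d(hidden, 64, 1)  # frame-level content features
        self.timbre_head = nn.Linear(hidden, 64)         # one global speaker vector

    def forward(self, wav):                  # wav: (batch, 1, samples)
        h = self.encoder(wav)                # (batch, hidden, frames)
        pitch = self.pitch_head(h)
        linguistic = self.linguistic_head(h)
        timbre = self.timbre_head(h.mean(dim=-1))
        return pitch, linguistic, timbre

class Synthesizer(nn.Module):
    """Reconstructs a waveform from the analysis features."""
    def __init__(self, hidden=128):
        super().__init__()
        self.proj = nn.Conv1d(1 + 64 + 64, hidden, 1)
        self.decoder = nn.ConvTranspose1d(hidden, 1, kernel_size=320, stride=160)

    def forward(self, pitch, linguistic, timbre):
        frames = linguistic.shape[-1]
        t = timbre.unsqueeze(-1).expand(-1, -1, frames)  # broadcast global timbre
        h = self.proj(torch.cat([pitch, linguistic, t], dim=1))
        return self.decoder(h)

analyzer, synthesizer = Analyzer(), Synthesizer()

# Self-supervised backbone training needs only raw audio:
# reconstruct the input waveform from its own analysis features.
wav = torch.randn(2, 1, 16000)               # a batch of unlabeled waveforms
recon = synthesizer(*analyzer(wav))

# A downstream task then models only part of the features. For example,
# zero-shot voice conversion reduces to swapping one feature: keep pitch
# and linguistic content from a source utterance, take timbre from a target.
src_pitch, src_ling, _ = analyzer(wav[:1])
_, _, tgt_timbre = analyzer(wav[1:])
converted = synthesizer(src_pitch, src_ling, tgt_timbre)
```

Under this reading, the other applications slot in the same way: text-to-speech and singing voice synthesis would predict the frame-level features from text or a music score, and voice designing would sample or edit the timbre feature, while the shared backbone handles waveform generation.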
