Sound examples
Contact: {jordi.bonada, merlijn.blaauw}@upf.edu
Submitted to ICASSP 2021, June 6-11, 2021, Toronto, Canada.
[other singing synthesis demos]
Abstract
We propose a semi-supervised singing synthesizer that can learn new voices from audio data alone, without any annotations such as phonetic segmentation. Our system is an encoder-decoder model with two encoders, linguistic and acoustic, and one (acoustic) decoder. In a first step, the system is trained in a supervised manner on a labelled multi-singer dataset. Here, we ensure that the embeddings produced by both encoders are similar, so that we can later use the model with either acoustic or linguistic input features. To learn a new voice in an unsupervised manner, the pretrained acoustic encoder is used to train a decoder for the target singer. Finally, at inference, the pretrained linguistic encoder is used together with the decoder of the new voice to produce acoustic features from linguistic input. We evaluate our system with a listening test and show that the results are comparable to those obtained with an equivalent supervised approach.
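For readers who prefer code, the sketch below illustrates the two-encoder / one-decoder setup and the training stages described in the abstract. It is a minimal, illustrative PyTorch sketch, not the authors' implementation: module types, dimensions, the specific losses, and the embedding-similarity term are all assumptions.

import torch
import torch.nn as nn

class LinguisticEncoder(nn.Module):
    # Maps frame-level linguistic (phonetic) features to an embedding sequence.
    def __init__(self, in_dim=64, emb_dim=256):
        super().__init__()
        self.rnn = nn.GRU(in_dim, emb_dim, batch_first=True)
    def forward(self, x):
        out, _ = self.rnn(x)
        return out

class AcousticEncoder(nn.Module):
    # Maps acoustic features (e.g. mel-spectrogram frames) into the same embedding space.
    def __init__(self, in_dim=80, emb_dim=256):
        super().__init__()
        self.rnn = nn.GRU(in_dim, emb_dim, batch_first=True)
    def forward(self, x):
        out, _ = self.rnn(x)
        return out

class Decoder(nn.Module):
    # Predicts acoustic features (mel-spectrogram frames) from an embedding sequence.
    def __init__(self, emb_dim=256, out_dim=80):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, out_dim, batch_first=True)
    def forward(self, z):
        out, _ = self.rnn(z)
        return out

def supervised_step(ling_feats, mel, enc_l, enc_a, dec, w_tie=1.0):
    # Step 1: supervised training on the labelled multi-singer dataset.
    z_l = enc_l(ling_feats)
    z_a = enc_a(mel)
    recon = nn.functional.l1_loss(dec(z_a), mel)   # reconstruct the acoustic features
    tie = nn.functional.mse_loss(z_l, z_a)         # keep both embedding spaces similar (loss choice assumed)
    return recon + w_tie * tie

def adaptation_step(mel, enc_a, dec_new):
    # Step 2: learn a new voice from audio only, reusing the pretrained acoustic encoder
    # (frozen here; whether it stays frozen in the actual system is an assumption).
    with torch.no_grad():
        z_a = enc_a(mel)
    return nn.functional.l1_loss(dec_new(z_a), mel)

# Step 3 (inference): mel_hat = dec_new(enc_l(ling_feats)), i.e. the pretrained
# linguistic encoder drives the new singer's decoder.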
Demo songs generated with the proposed semi-supervised model, trained on a full target dataset (2 h 7 min) consisting of audio only (no other annotations were used).
For these demos, the model was controlled by a timed phonetic sequence and F0, in this case obtained from a reference recording.
The waveform was generated from the predicted mel-spectrogram using a neural vocoder. The final mixes include effects and background music.
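Schematically, and reusing the illustrative modules from the sketch above, rendering one of these demo clips would look roughly as follows. The frame count, feature shapes, and the omission of F0 conditioning are simplifications, and the vocoder stage is only indicated as a comment.

import torch

T = 400                                    # number of frames in the clip (arbitrary)
ling_feats = torch.zeros(1, T, 64)         # frame-level features derived from the timed phonetic sequence
f0 = torch.zeros(1, T, 1)                  # per-frame F0 from the reference recording (conditioning not shown)

enc_l, dec_new = LinguisticEncoder(), Decoder()   # pretrained linguistic encoder + the new voice's decoder
mel_hat = dec_new(enc_l(ling_feats))              # predicted mel-spectrogram
# wav = neural_vocoder(mel_hat)                   # a pretrained neural vocoder turns the mel-spectrogram into audio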
This work was funded by the TROMPA project (H2020 grant agreement No 770376).