
Low-resource expressive text-to-speech using data augmentation


If you have a question about this talk, please contact Dr Kate Knill.

Abstract: While recent neural text-to-speech (TTS) systems perform remarkably well, they typically require a substantial amount of recordings from the target speaker reading in the desired speaking style. In this work, we present a novel 3-step methodology to circumvent the costly operation of recording large amounts of target data in order to build expressive style voices with as little as 15 minutes of such recordings. First, we augment data via voice conversion by leveraging recordings in the desired speaking style from other speakers. Next, we use that synthetic data on top of the available recordings to train a TTS model. Finally, we fine-tune that model to further increase quality. Our evaluations show that the proposed changes bring significant improvements over non-augmented models across many perceived aspects of synthesised speech. We demonstrate the proposed approach on two styles (newscaster and conversational), on various speakers, and on both single- and multi-speaker models, illustrating the robustness of our approach.
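The three steps described in the abstract can be sketched as a simple pipeline. This is only an illustrative outline under assumed interfaces: all function names, data shapes, and quantities below are hypothetical stand-ins, not the authors' actual implementation.

```python
# Hypothetical sketch of the 3-step low-resource pipeline from the abstract.
# All functions and data records here are illustrative placeholders.

def voice_convert(style_recordings, target_speaker):
    """Step 1: voice-convert expressive recordings from supporting
    speakers into the target speaker's voice (synthetic augmented data)."""
    return [{"speaker": target_speaker, "style": r["style"], "synthetic": True}
            for r in style_recordings]

def train_tts(real_data, synthetic_data):
    """Step 2: train a TTS model on the small set of real target-speaker
    recordings plus the voice-converted synthetic data."""
    corpus = real_data + synthetic_data
    return {"trained_on": len(corpus)}

def fine_tune(model, real_data):
    """Step 3: fine-tune on the real target recordings only, to further
    increase quality."""
    return dict(model, fine_tuned_on=len(real_data))

# ~15 minutes of real target-speaker recordings in the desired style
real = [{"speaker": "target", "style": "newscaster", "synthetic": False}] * 30
# Expressive recordings in the same style from other (supporting) speakers
support = [{"speaker": f"spk{i}", "style": "newscaster"} for i in range(300)]

synthetic = voice_convert(support, "target")
model = train_tts(real, synthetic)
model = fine_tune(model, real)
print(model["trained_on"], model["fine_tuned_on"])  # 330 30
```

The key design point the sketch captures is that the scarce real recordings are used twice: once mixed with synthetic data for initial training, and again alone for fine-tuning.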

Bio: Thomas Merritt is an applied scientist at Amazon, based in Cambridge. He received his PhD from the University of Edinburgh in 2016 with the thesis "Overcoming the limitations of statistical parametric speech synthesis". Since graduating he has been working on text-to-speech research at Amazon, focusing on improving the prosody and overall naturalness of synthesised speech.

This talk is part of the CUED Speech Group Seminars series.
