The effect of normalization -- a case study in speech synthesis
If you have a question about this talk, please contact Shakir Mohamed.
Undirected graphical models are ubiquitous in application domains of machine learning. However, the normalization constants in these models are often difficult to compute, and as a result are frequently dropped altogether. In this talk we'll look at the qualitative effect of this lack of normalization in the domain of statistical speech synthesis.
Specifically we’ll compare the predictive distributions of the standard unnormalized speech synthesis model, its globally-normalized undirected counterpart, and a more tractable directed graphical model. Along the way, we’ll highlight some of the general issues surrounding the choice between undirected and directed graphical models for sequence data.
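The distinction above can be made concrete with a toy sketch (illustrative only, not taken from the talk): in an undirected chain, the unnormalized potentials must be divided by a global constant Z, whose computation requires summing over every possible sequence, whereas a directed chain is built from conditional distributions that each sum to one, so no global normalization is ever needed. All potentials and probabilities below are arbitrary, hypothetical choices.

```python
# Illustrative sketch: globally-normalized undirected chain vs.
# locally-normalized directed chain over binary sequences of length T.
import itertools
import math

T = 4  # hypothetical sequence length; states are {0, 1}

# --- Undirected chain: product of arbitrary pairwise potentials. ---
def potential(a, b):
    return 2.0 if a == b else 0.5  # arbitrary illustrative potential

def score(x):
    # Unnormalized score of a full sequence.
    return math.prod(potential(x[t], x[t + 1]) for t in range(len(x) - 1))

# The normalization constant Z requires summing over ALL 2**T sequences;
# this exhaustive sum is exactly what becomes intractable at scale.
all_seqs = list(itertools.product([0, 1], repeat=T))
Z = sum(score(x) for x in all_seqs)
probs_undirected = {x: score(x) / Z for x in all_seqs}

# --- Directed chain: each factor is a conditional distribution. ---
def cond(b, a):
    p_same = 0.8  # hypothetical transition probability
    return p_same if a == b else 1.0 - p_same

def prob_directed(x):
    p = 0.5  # uniform initial distribution over {0, 1}
    for t in range(len(x) - 1):
        p *= cond(x[t + 1], x[t])  # each conditional sums to 1 over b
    return p

# Both define valid distributions, but only the undirected model needed Z.
assert abs(sum(probs_undirected.values()) - 1.0) < 1e-9
assert abs(sum(prob_directed(x) for x in all_seqs) - 1.0) < 1e-9
```

Dropping Z in the undirected model leaves the scores' ratios intact but destroys the probabilistic interpretation of the values themselves, which is the qualitative issue the talk examines.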
The introduction to speech synthesis will be aimed entirely at machine learners with no background in speech modelling, and I hope it will be realistic (close to state-of-the-art), self-contained, and framed in terms of probabilistic modelling.
This talk is part of the Machine Learning Reading Group @ CUED series.