
Generative models for audio and music processing


If you have a question about this talk, please contact Oliver Williams.

The analysis of audio signals is central to the scientific understanding of human hearing and to a broad spectrum of engineering applications such as sound localisation, hearing aids and music information retrieval. Historically, the main mathematical tools have come from signal processing: digital filtering theory, system identification and various transform methods such as Fourier techniques. In recent years there has been increasing interest in statistical approaches and tools from machine learning. The application of statistical techniques is quite natural: acoustical time series can be conveniently described by hierarchical signal models that incorporate prior knowledge from various sources, such as physics or studies of human cognition and perception. Once a realistic hierarchical model is constructed, many tasks such as coding, analysis, restoration, transcription, separation, identification and resynthesis can be formulated consistently as Bayesian posterior inference problems. In this talk, I will sketch our current work on audio and music signal analysis. In particular, I will illustrate various realistic generative signal models, such as factorial switching state space models, Gamma-Markov random fields and point process models, for music transcription, restoration and source separation. Some models admit exact inference; for the others, efficient algorithms based on variational or stochastic approximation methods can be developed.
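To make the "posterior inference in a generative signal model" idea concrete, here is a minimal illustrative sketch (not taken from the talk): exact posterior inference by Kalman filtering in a linear-Gaussian state space model, the simplest building block of the richer switching and factorial models mentioned in the abstract. The model, parameter values and data below are invented for illustration; the latent state plays the role of a slowly varying signal level observed in noise.

```python
import numpy as np

# Illustrative linear-Gaussian state space model (hypothetical parameters):
#   x_t = a * x_{t-1} + process noise (variance q)
#   y_t = x_t + observation noise (variance r)
# The posterior p(x_t | y_1..t) is Gaussian and computed exactly.

def kalman_filter(y, a=0.99, q=0.01, r=0.5, m0=0.0, p0=1.0):
    """Return filtered posterior means and variances for each time step."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        # Predict the next latent state from the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update the prediction with the new observation.
        k = p_pred / (p_pred + r)          # Kalman gain
        m = m_pred + k * (obs - m_pred)
        p = (1.0 - k) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Synthetic data: a smooth latent trajectory observed in heavy noise.
rng = np.random.default_rng(0)
T = 200
x = np.cumsum(rng.normal(0.0, 0.1, T))     # latent random walk
y = x + rng.normal(0.0, 0.7, T)            # noisy observations
means, variances = kalman_filter(y)
print("raw MSE:", np.mean((y - x) ** 2), "filtered MSE:", np.mean((means - x) ** 2))
```

For the nonlinear or non-Gaussian models discussed in the talk (switching dynamics, Gamma-Markov random fields, point processes), this exact recursion is no longer available, which is where variational and stochastic approximation methods come in.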

This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.

