
Sparse trees and long memory: Bayesian inference for discrete time series


If you have a question about this talk, please contact Louise Segar.

We discuss novel methodological tools for effective Bayesian inference and model selection for general discrete time series data. The starting point of our approach is the use of a rich class of Bayesian hierarchical models, together with the observation that the so-called "context tree weighting" algorithm, developed by Willems and co-authors in the information-theoretic literature in the early 1990s, admits broad extensions that provide effective computational tools for inference in very general settings. We will introduce a new class of priors on variable-memory Markov models, a Metropolis-within-Gibbs MCMC sampler for exploring the full posterior distribution on model space, and an unusual family of exact algorithms for inference.
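For readers unfamiliar with the classical context tree weighting recursion that the talk builds on, here is a minimal sketch for binary data (this illustrates only the original Willems-style algorithm, not the speakers' Bayesian extensions; the function names and the zero-padding convention for initial contexts are our own choices). Each tree node mixes a Krichevsky-Trofimov estimate with the product of its children's weighted probabilities:

```python
from fractions import Fraction

def kt_prob(bits):
    """Krichevsky-Trofimov sequential probability of a binary string:
    the marginal likelihood under a Beta(1/2, 1/2) prior on the bias."""
    a = b = 0  # counts of 0s and 1s seen so far
    p = Fraction(1)
    for x in bits:
        p *= Fraction(2 * (b if x else a) + 1, 2 * (a + b + 1))
        a, b = a + (x == 0), b + (x == 1)
    return p

def ctw_prob(data, depth, context=()):
    """Weighted probability of `data` under a context tree of maximal
    depth `depth`, with the past padded by zeros.  At an internal node:
    Pw = 1/2 * Pe + 1/2 * Pw(child 0) * Pw(child 1)."""
    padded = (0,) * depth + tuple(data)
    # Symbols whose immediately preceding len(context) bits equal `context`
    # (contexts are stored oldest-bit first).
    sub = [padded[i] for i in range(depth, len(padded))
           if padded[i - len(context):i] == context]
    pe = kt_prob(sub)
    if len(context) == depth:  # leaf node
        return pe
    # Extending the context prepends one older bit, splitting `sub`
    # between the two children.
    return (Fraction(1, 2) * pe
            + Fraction(1, 2) * ctw_prob(data, depth, (0,) + context)
                             * ctw_prob(data, depth, (1,) + context))
```

Because the mixture is over all prunings of the depth-`depth` tree, `ctw_prob` defines a valid probability assignment: summed over all binary sequences of a fixed length it equals 1, which is what makes it usable for compression and sequential prediction.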

Applications range from the classical tasks of estimation and model selection to more application-specific problems including segmentation, anomaly detection, entropy estimation, causality testing, and on-line prediction with “big data.” Our algorithmic, methodological and theoretical results are illustrated by extensive computational experiments on both synthetic and real data. Specific applications to data compression, neuroscience, finance, genetics, and animal communication will be mentioned briefly.

This talk is part of the Signal Processing and Communications Lab Seminars series.



