Short talks: Mixed Cumulative Distribution Networks; Nonparametric Bayesian community discovery in social networks; Expectation Propagation for Dirichlet Process Mixture Models

If you have a question about this talk, please contact Sinead Williamson.

Three short talks by PhD students from the Gatsby Unit, UCL.

Charles Blundell: Mixed Cumulative Distribution Networks (Ricardo Silva, Charles Blundell and Yee Whye Teh)

Acyclic directed mixed graphs (ADMGs) are generalizations of DAGs that can succinctly capture much richer sets of conditional independencies, and are especially useful for implicitly modelling the effects of latent variables. Unfortunately, there are currently no parameterizations of general ADMGs. In this work we apply recent work on cumulative distribution networks and copulas to propose one general construction for ADMG models.
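Cumulative distribution networks build joint CDFs as products of local CDF factors. The simplest instance of the closure property they exploit is univariate: the product of two CDFs is again a valid CDF, namely that of the maximum of independent draws. The sketch below illustrates only this building block, not the ADMG parameterization from the talk; the Gumbel marginals are an arbitrary choice for illustration.

```python
import math

def gumbel_cdf(x, mu=0.0, beta=1.0):
    """CDF of a Gumbel(mu, beta) distribution."""
    return math.exp(-math.exp(-(x - mu) / beta))

def product_cdf(x):
    """Product of two CDFs: the CDF of max(X1, X2) for
    independent X1 ~ Gumbel(0, 1) and X2 ~ Gumbel(1, 1)."""
    return gumbel_cdf(x, mu=0.0) * gumbel_cdf(x, mu=1.0)

# A valid CDF is nondecreasing with limits 0 and 1.
grid = [product_cdf(-5 + 0.1 * i) for i in range(200)]
assert all(a <= b for a, b in zip(grid, grid[1:]))
```

CDN models extend this idea to products of multivariate CDF factors over overlapping sets of variables, subject to conditions that keep the product a valid joint CDF.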

Lloyd Elliott: Nonparametric Bayesian community discovery in social networks (Lloyd Elliott and Yee Whye Teh)

We introduce a novel prior on random graphs using a beta process. The atoms of the beta process represent communities and the edges of the graph are independent given the latent community structure. We use MCMC sampling methods to infer the community structure and to impute missing links in large social network data sets. We use split-merge updates to increase the effective sample size of the MCMC chain and improve the predictive probabilities.
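The generative idea can be sketched as follows. This toy version replaces the beta process with a fixed number of communities and hand-picked probabilities (`membership_prob`, `within_prob`, `background_prob` are all illustrative assumptions, not the talk's model); what it preserves is the key structural property stated in the abstract: edges are conditionally independent given the latent community memberships.

```python
import random

def sample_graph(num_nodes, num_communities, rng,
                 membership_prob=0.3, within_prob=0.8, background_prob=0.05):
    """Toy community-structured random graph: nodes get binary community
    memberships (standing in for beta-process atoms); an edge is likely
    when two nodes share at least one community, and edges are drawn
    independently given the memberships."""
    Z = [[rng.random() < membership_prob for _ in range(num_communities)]
         for _ in range(num_nodes)]
    edges = set()
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            shared = any(zi and zj for zi, zj in zip(Z[i], Z[j]))
            p = within_prob if shared else background_prob
            if rng.random() < p:
                edges.add((i, j))
    return Z, edges

Z, edges = sample_graph(num_nodes=20, num_communities=3,
                        rng=random.Random(1))
```

Inference in the talk runs in the opposite direction: given an observed graph, MCMC (with split-merge moves) recovers the latent memberships and imputes missing links.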

Vinayak Rao: Expectation Propagation for Dirichlet Process Mixture Models (with Erik Sudderth and Yee Whye Teh)

We explore Expectation Propagation for approximate inference in the DP mixture model. By considering three related representations of the DP (based on the Polya urn and Chinese restaurant process), we derive three different EP approximation algorithms. The simplest of these is the approximation studied in Minka and Ghahramani (2003). While this does not represent information about the posterior clustering structure, the other two novel approaches include additional latent variables to capture this clustering structure and offer richer posterior representations. We also elaborate on improvements to the basic EP algorithms: reducing computational costs by removing low probability components, and learning the hyperparameters of the DP mixture model.
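The Polya urn / Chinese restaurant process representations underlying these approximations define the DP's prior over clusterings: customer i joins an existing table with probability proportional to its occupancy, or starts a new table with probability proportional to the concentration parameter alpha. A minimal forward-sampling sketch (this shows the prior clustering process only, not the EP approximations themselves):

```python
import random

def crp_assignments(n, alpha, rng):
    """Sample cluster assignments for n customers from a Chinese
    restaurant process with concentration parameter alpha."""
    counts = []        # number of customers at each table
    assignments = []
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        table = None
        for k, c in enumerate(counts):
            acc += c
            if r < acc:     # join table k w.p. proportional to counts[k]
                table = k
                break
        if table is None:   # otherwise open a new table (prob ~ alpha)
            table = len(counts)
            counts.append(0)
        counts[table] += 1
        assignments.append(table)
    return assignments

z = crp_assignments(100, alpha=1.0, rng=random.Random(0))
```

Larger alpha yields more tables on average; the EP algorithms in the talk approximate the posterior over such clusterings given observed data.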

This talk is part of the Machine Learning @ CUED series.


© 2006-2019 Talks.cam, University of Cambridge.