Short talks: Mixed Cumulative Distribution Networks; Nonparametric Bayesian community discovery in social networks; Expectation Propagation for Dirichlet Process Mixture Models
If you have a question about this talk, please contact Sinead Williamson.

Three short talks by PhD students from the Gatsby Unit, UCL. (Toy code sketches illustrating each talk follow the abstracts below.)

Charles Blundell: Mixed Cumulative Distribution Networks (Ricardo Silva, Charles Blundell and Yee Whye Teh)

Acyclic directed mixed graphs (ADMGs) are generalizations of DAGs that can succinctly capture much richer sets of conditional independencies, and are especially useful in modeling the effects of latent variables implicitly. Unfortunately, there are currently no parameterizations of general ADMGs. In this work we apply recent work on cumulative distribution networks and copulas to propose one general construction for ADMG models.

Lloyd Elliott: Nonparametric Bayesian community discovery in social networks (Lloyd Elliott and Yee Whye Teh)

We introduce a novel prior on random graphs using a beta process. The atoms of the beta process represent communities, and the edges of the graph are independent given the latent community structure. We use MCMC sampling methods to infer the community structure and to impute missing links in large social network data sets. We use split-merge updates to increase the effective sample size of the MCMC chain and improve the predictive probabilities.

Vinayak Rao: Expectation Propagation for Dirichlet Process Mixture Models (with Erik Sudderth and Yee Whye Teh)

We explore Expectation Propagation (EP) for approximate inference in the Dirichlet process (DP) mixture model. By considering three related representations of the DP (based on the Polya urn and the Chinese restaurant process), we derive three different EP approximation algorithms. The simplest of these is the approximation studied by Minka and Ghahramani (2003). While it does not represent information about the posterior clustering structure, the other two novel approaches include additional latent variables to capture this clustering structure and offer richer posterior representations. We also elaborate on improvements to the basic EP algorithms: reducing computational cost by removing low-probability components, and learning the hyperparameters of the DP mixture model.
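To make the first abstract concrete, here is a minimal Python sketch of a cumulative-distribution-network-style joint CDF: a product of bivariate copula factors, one per edge of a toy three-variable mixed graph. The Gumbel copulas, the graph, and all parameters are illustrative assumptions, not the paper's parameterization, and the sketch omits the marginal recalibration a mixed CDN would need.

```python
from itertools import product
from math import erf, exp, log, sqrt

def phi(x):
    """Standard normal CDF, used as the univariate marginal transform."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gumbel(u1, u2, theta):
    """Bivariate Gumbel copula CDF (valid for theta >= 1)."""
    u1, u2 = max(u1, 1e-12), max(u2, 1e-12)
    return exp(-(((-log(u1)) ** theta + (-log(u2)) ** theta) ** (1.0 / theta)))

# Toy mixed graph on three variables: one copula factor per (bi-directed)
# edge, here (0,1) and (1,2). Scopes and thetas are assumptions.
FACTORS = [((0, 1), 1.5), ((1, 2), 2.0)]

def joint_cdf(x):
    """CDN-style joint CDF: a product of CDFs over factor scopes is itself
    a valid CDF, which is the property CDNs exploit. The phi marginals are
    distorted by the product; the paper's construction recalibrates them,
    which this toy sketch omits."""
    u = [phi(xi) for xi in x]
    val = 1.0
    for (i, j), theta in FACTORS:
        val *= gumbel(u[i], u[j], theta)
    return val

def box_prob(lo, hi):
    """P(lo < X <= hi) via inclusion-exclusion over the 2^d box corners."""
    d = len(lo)
    total = 0.0
    for corner in product((0, 1), repeat=d):
        x = [hi[k] if c else lo[k] for k, c in enumerate(corner)]
        total += (-1) ** (d - sum(corner)) * joint_cdf(x)
    return total

print(box_prob([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]))
```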
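For the second abstract, a toy generative sketch: community memberships are drawn from an Indian buffet process (the combinatorial structure obtained by integrating out a beta process, whose atoms are the communities), and edges are then independent given the memberships. The noisy-OR link function and its parameters w and eps are assumptions for illustration; the talk's exact likelihood and its MCMC inference are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ibp(n, alpha):
    """Indian buffet process draw of binary community memberships Z."""
    rows, counts = [], []
    for i in range(n):
        # join an existing community k with probability m_k / (i + 1)
        row = [int(rng.random() < m / (i + 1)) for m in counts]
        counts = [m + z for m, z in zip(counts, row)]
        # found Poisson(alpha / (i + 1)) brand-new communities
        k_new = rng.poisson(alpha / (i + 1))
        row += [1] * k_new
        counts += [1] * k_new
        rows.append(row)
    Z = np.zeros((n, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

def edge_probs(Z, w=0.8, eps=0.01):
    """Noisy-OR link (an assumed, illustrative likelihood): each community
    shared by nodes i and j independently explains an edge with
    probability w, on top of a background rate eps."""
    shared = Z @ Z.T
    P = 1.0 - (1.0 - eps) * (1.0 - w) ** shared
    np.fill_diagonal(P, 0.0)
    return P

Z = sample_ibp(20, alpha=2.0)
A = np.triu((rng.random((20, 20)) < edge_probs(Z)).astype(int), 1)
A = A + A.T  # symmetric adjacency: edges are independent given Z
print("communities:", Z.shape[1], "edges:", A.sum() // 2)
```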
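For the third abstract, a sketch of the Polya urn / Chinese restaurant process predictive probabilities that the EP approximations are built on, applied as a single sequential pass with hard assignments over a conjugate Gaussian DP mixture with known observation variance. This greedy one-pass scheme is a crude stand-in, much weaker than the EP algorithms discussed in the talk; the data, priors, and concentration parameter are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data from two well-separated Gaussian clusters.
x = np.concatenate([rng.normal(-3.0, 1.0, 30), rng.normal(3.0, 1.0, 30)])
rng.shuffle(x)

alpha = 1.0          # DP concentration
mu0, v0 = 0.0, 10.0  # prior on each component mean
vx = 1.0             # (known) observation variance

def predictive(xi, mean, var):
    """Marginal density of xi for a component whose mean has posterior
    N(mean, var): xi ~ N(mean, var + vx) by normal-normal conjugacy."""
    v = var + vx
    return np.exp(-0.5 * (xi - mean) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

means, vars_, counts = [], [], []
for xi in x:
    # Polya-urn weights: m_k for occupied tables, alpha for a new table
    scores = [m * predictive(xi, mu, v)
              for m, mu, v in zip(counts, means, vars_)]
    scores.append(alpha * predictive(xi, mu0, v0))
    k = int(np.argmax(scores))
    if k == len(means):          # open a new table
        means.append(mu0); vars_.append(v0); counts.append(0)
    # conjugate update of component k's mean posterior
    prec = 1.0 / vars_[k] + 1.0 / vx
    means[k] = (means[k] / vars_[k] + xi / vx) / prec
    vars_[k] = 1.0 / prec
    counts[k] += 1

print("clusters:", len(means), "sizes:", counts, "means:", np.round(means, 2))
```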
This talk is part of the Machine Learning @ CUED series.