BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Short talks: Mixed Cumulative Distribution Networks\; Nonparametri
 c Bayesian community discovery in social networks\; Expectation Propagatio
 n for Dirichlet Process Mixture Models - Charles Blundell\, Lloyd Elliott
  and Vinayak Rao\, Gatsby Unit\, UCL
DTSTART:20101105T133000Z
DTEND:20101105T150000Z
UID:TALK27812@talks.cam.ac.uk
CONTACT:Sinead Williamson
DESCRIPTION:Three short talks by PhD students from the Gatsby Unit\, UCL.\
 n\nCharles Blundell: Mixed Cumulative Distribution Networks (Ricardo\nSilv
 a\, Charles Blundell and Yee Whye Teh)\n\nAcyclic directed mixed graphs (A
 DMGs) are generalizations of DAGs that can\nsuccinctly capture much richer
  sets of conditional independencies\, and are\nespecially useful in modeli
 ng the effects of latent variables implicitly.\nUnfortunately\, there are 
 currently no parameterizations of general ADMGs. In\nthis work we apply re
 cent work on cumulative distribution networks and copulas\nto propose one 
 general construction for ADMG models.\n\n\nLloyd Elliott: Nonparametric Bay
 esian community discovery in social\nnetworks (Lloyd Elliott and Yee Whye 
 Teh)\n\nWe introduce a novel prior on random graphs using a beta process. 
  The\natoms of the beta process represent communities and the edges of the
 \ngraph are independent given the latent community structure.  We use\nMCM
 C sampling methods to infer the community structure and to impute\nmissing
  links in large social network data sets.  We use split-merge\nupdates to 
 increase the effective sample size of the MCMC chain and\nimprove the pred
 ictive probabilities.\n\n\nVinayak Rao: Expectation Propagation for Dirich
 let Process Mixture Models (with\nErik Sudderth and Yee Whye Teh)\n\nWe ex
 plore Expectation Propagation for approximate inference in the DP\nmixture
  model. By considering three related representations of the DP\n(based on 
 the Polya urn and Chinese restaurant process)\, we derive\nthree different
  EP approximation algorithms. The simplest of these is\nthe approximation 
 studied in Minka and Ghahramani (2003).  While this\ndoes not represent in
 formation about the posterior clustering\nstructure\, the other two novel 
 approaches include additional latent\nvariables to capture this clustering
  structure and offer richer\nposterior representations. We also elaborate 
 on improvements to the\nbasic EP algorithms: reducing computational costs 
 by removing low\nprobability components\, and learning the hyperparameters
  of the DP\nmixture model.
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
