BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Rejection Sampling Variational Inference - Francisco J. R. Ruiz
  (Columbia University & University of Cambridge)
DTSTART;TZID=Europe/London:20161122T113000
DTEND;TZID=Europe/London:20161122T123000
UID:TALK69280AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/69280
DESCRIPTION:Talk based on https://arxiv.org/abs/1610.05683\, for which
  the abstract is:\n\n"Variational inference using the reparameterization
  trick has enabled large-scale approximate Bayesian inference in complex
  probabilistic models\, leveraging stochastic optimization to sidestep
  intractable expectations. The reparameterization trick is applicable
  when we can simulate a random variable by applying a (differentiable)
  deterministic function on an auxiliary random variable whose
  distribution is fixed. For many distributions of interest (such as the
  gamma or Dirichlet)\, simulation of random variables relies on
  rejection sampling. The discontinuity introduced by the accept-reject
  step means that standard reparameterization tricks are not applicable.
  We propose a new method that lets us leverage reparameterization
  gradients even when variables are outputs of a rejection sampling
  algorithm. Our approach enables reparameterization on a larger class
  of variational distributions. In several studies of real and synthetic
  data\, we show that the variance of the estimator of the gradient is
  significantly lower than other state-of-the-art methods. This leads to
  faster convergence of stochastic optimization variational inference."
LOCATION:CBL Room BE-438
END:VEVENT
END:VCALENDAR