BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning Reading Group @ CUED
SUMMARY:Variational Bayes as Surrogate Regression - Will T
ebbutt (University of Cambridge)
DTSTART;TZID=Europe/London:20210210T110000
DTEND;TZID=Europe/London:20210210T123000
UID:TALK156724AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/156724
DESCRIPTION:Variational Bayes is a useful approximate inferenc
e framework in which an intractable posterior dist
ribution is approximated by a simpler tractable one.
\nThe extent to which this is useful (usually) de
pends on how closely this approximation matches re
ality\, and how quickly it can be obtained. We'll
present lines of work that utilise the posteriors
of tractable models as this approximation\, and th
e interesting inference algorithms that arise in t
his setting. \n\nAlthough we'll cover all of these
in the presentation\, it will be helpful to have
some familiarity with the basics of variational Ba
yes (e.g. what the ELBO is)\, variational autoenco
ders and the idea of amortised inference\, exponen
tial families\, and Gaussian processes. A basic un
derstanding of natural gradients would also be hel
pful\, but is not essential. \n\nIf you have the
time\, please read this: \nOpper\, Manfred\, and
Cédric Archambeau. "The variational Gaussian appr
oximation revisited." Neural computation 21.3 (200
9): 786-792.\n\nExtra reading if you have time on
your hands: \n# Bui\, Thang D.\, et al. "Partition
ed variational inference: A unified framework enco
mpassing federated and continual learning." arXiv
preprint arXiv:1811.11206 (2018).\n# Ashman\, Matt
hew\, et al. "Sparse Gaussian Process Variational
Autoencoders." arXiv preprint arXiv:2010.10177 (20
20). \n# Khan\, Mohammad Emtiyaz\, and Didrik Nielsen.
"Fast yet simple natural-gradient descent for var
iational inference in complex models." 2018 Intern
ational Symposium on Information Theory and Its Ap
plications (ISITA). IEEE\, 2018.\n# Chang\, Paul E
.\, et al. "Fast variational learning in state-spa
ce Gaussian process models." 2020 IEEE 30th Intern
ational Workshop on Machine Learning for Signal Pr
ocessing (MLSP). IEEE\, 2020. \n# Johnson\, Matthe
w James\, et al. "Composing graphical models with
neural networks for structured representations and
fast inference." Proceedings of the 30th Internat
ional Conference on Neural Information Processing
Systems. 2016.
LOCATION:https://eng-cam.zoom.us/j/86068703738?pwd=YnFleXFQ
OE1qR1h6Vmtwbno0LzFHdz09
CONTACT:Elre Oldewage
END:VEVENT
END:VCALENDAR