BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Two Approximate Sampling Methods for Bayesian Deep
  Learning - Wesley Maddox (New York University)
DTSTART;TZID=Europe/London:20190823T110000
DTEND;TZID=Europe/London:20190823T120000
UID:TALK129037AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/129037
DESCRIPTION:Deep neural networks have become popular for many 
 tasks\, especially object classification\, in comp
 uter vision and machine learning. However\, these 
 classes of models are known to have poor uncert
 ainty representations – e.g. they do not know what
  they do not know. To address this challenge\, we 
 propose two Bayesian approaches to approximate the
  posterior distribution of the models' parameters.
  The first\, termed stochastic weight averaging Ga
 ussian (SWAG)\, fits a Gaussian approximation arou
 nd the iterates of the stochastic gradient descent
  trajectory from standard training of DNNs. The se
 cond\, subspace inference\, instead reduces the hi
 gh dimensionality of DNNs to very low dimensions\,
  before performing Bayesian model averaging in tha
 t low dimensional subspace. Both methods draw on e
 xisting theory and are demonstrated to have strong
  empirical results on both regression and classifi
 cation\, scaling to even ImageNet-sized datasets.
LOCATION:Engineering Department\, CBL Room BE-438.
CONTACT:Adrià Garriga Alonso
END:VEVENT
END:VCALENDAR
