BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Microsoft Research Cambridge\, public talks
SUMMARY:Neural Ordinary Differential Equations - Prof David Duvenaud (University of Toronto)
DTSTART;TZID=Europe/London:20180717T150000
DTEND;TZID=Europe/London:20180717T160000
UID:TALK108436AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/108436
DESCRIPTION:We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers\, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost\, adapt their evaluation strategy to each input\, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows\, a generative model that can train by maximum likelihood\, without partitioning or ordering the data dimensions. For training\, we show how to scalably backpropagate through any ODE solver\, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge\, CB1 2FB
CONTACT:Microsoft Research Cambridge Talks Admins
END:VEVENT
END:VCALENDAR