BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Information Engineering Distinguished Lecture Seri
 es
SUMMARY:Fundamental Limits of Learning with Feedforward an
 d Recurrent Neural Networks - Prof. Helmut Bölcske
 i\, ETH Zurich
DTSTART;TZID=Europe/London:20220527T120000
DTEND;TZID=Europe/London:20220527T130000
UID:TALK171458AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/171458
DESCRIPTION:Deep neural networks have led to breakthrough resu
 lts in numerous practical machine learning tasks s
 uch as image classification\, image captioning\, c
 ontrol-policy-learning to play the board game Go\,
  and most recently the prediction of protein struc
 tures. In this lecture\, we will attempt to unders
 stand some of the structural and mathematical reas
 ons driving these successes. Specifically\, we stu
 dy what is possible in principle if no constraints
  are imposed on the learning algorithm and on the 
 amount and quality of training data. The guiding t
 heme will be a relation between the complexity of 
 the objects to be learned and the networks approxi
 mating them\, with the central result stating that
  universal Kolmogorov-optimality is achieved by fe
 edforward neural networks in function learning and
  by recurrent neural networks in dynamical system 
 learning.
LOCATION:Department of Engineering - LT1
CONTACT:Prof. Ramji Venkataramanan
END:VEVENT
END:VCALENDAR
