BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Isaac Newton Institute Seminar Series
SUMMARY:Optimal Approximation with Sparsely Connected Deep Neural Netwo
 rks - Gitta Kutyniok (Technische Universität Berlin)
DTSTART;TZID=Europe/London:20171030T145000
DTEND;TZID=Europe/London:20171030T154000
UID:TALK94018AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/94018
DESCRIPTION:Despite the outstanding success of deep neural netwo
 rks in real-world applications\, most of the related research is emp
 irically driven and a mathematical foundation is almost completely mi
 ssing. One central task of a neural network is to approximate a funct
 ion\, which for instance encodes a classification task. In this talk
 \, we will be concerned with the question of how well a function can b
 e approximated by a neural network with sparse connectivity. Using me
 thods from approximation theory and applied harmonic analysis\, we wi
 ll derive a fundamental lower bound on the sparsity of a neural netwo
 rk. By explicitly constructing neural networks based on certain repre
 sentation systems\, so-called $\\alpha$-shearlets\, we will then demo
 nstrate that this lower bound can in fact be attained. Finally\, we p
 resent numerical experiments\, which surprisingly show that the stand
 ard backpropagation algorithm already generates deep neural networks o
 beying those optimal approximation rates. This is joint work with H. B
 ölcskei (ETH Zurich)\, P. Grohs (Uni Vienna)\, and P. Petersen (TU Ber
 lin).
LOCATION:Seminar Room 1\, Newton Institute
CONTACT:info@newton.ac.uk
END:VEVENT
END:VCALENDAR