BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Isaac Newton Institute Seminar Series
SUMMARY:Certified dimension reduction of the input parameter space of vector-valued functions - Olivier Zahm (Massachusetts Institute of Technology)
DTSTART;TZID=Europe/London:20180308T144500
DTEND;TZID=Europe/London:20180308T153000
UID:TALK102061AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/102061
DESCRIPTION:Co-authors: Paul Constantine (University of Colorado)\, Clémentine Prieur (University Joseph Fourier)\, Youssef Marzouk (MIT)\n\nApproximation of multivariate functions is a difficult task when the number of input parameters is large. Identifying the directions where the function does not significantly vary is a key preprocessing step to reduce the complexity of the approximation algorithms.\n\nAmong other dimensionality reduction tools\, the active subspace is defined by means of the gradient of a scalar-valued function. It can be interpreted as the subspace in the parameter space where the gradient varies the most. In this talk\, we propose a natural extension of the active subspace for vector-valued functions\, e.g. functions with multiple scalar-valued outputs or functions taking values in function spaces. Our methodology consists in minimizing an upper bound of the approximation error obtained using Poincaré-type inequalities.\n\nWe also compare the proposed gradient-based approach with the popular and widely used truncated Karhunen-Loève decomposition (KL). We show that\, from a theoretical perspective\, the truncated KL can be interpreted as a method which minimizes a looser upper bound of the error compared to the one we derived. Also\, numerical comparisons show that better dimension reduction can be obtained provided gradients of the function are available.
LOCATION:Seminar Room 1\, Newton Institute
CONTACT:INI IT
END:VEVENT
END:VCALENDAR