BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Isaac Newton Institute Seminar Series
SUMMARY:Least squares estimation: Beyond Gaussian regression models - Qiyang Han (University of Washington)
DTSTART;TZID=Europe/London:20180508T110000
DTEND;TZID=Europe/London:20180508T120000
UID:TALK105241AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/105241
DESCRIPTION:We study the convergence rate of the least squares estimator (LSE) in a regression model with possibly heavy-tailed errors. Despite its importance in practical applications\, theoretical understanding of this problem has been limited. We first show that from a worst-case perspective\, the convergence rate of the LSE in a general non-parametric regression model is given by the maximum of the Gaussian regression rate and the noise rate induced by the errors. In the more difficult statistical model where the errors only have a second moment\, we further show that the sizes of the 'localized envelopes' of the model give a sharp interpolation for the convergence rate of the LSE between the worst-case rate and the (optimal) parametric rate. These results indicate both certain positive and negative aspects of the LSE as an estimation procedure in a heavy-tailed regression setting. The key technical innovation is a new multiplier inequality that sharply controls the size of the multiplier empirical process associated with the LSE\, which also finds applications in shape-restricted and sparse linear regression problems.
LOCATION:Seminar Room 2\, Newton Institute
CONTACT:INI IT
END:VEVENT
END:VCALENDAR