BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Robust Estimation via Robust Gradient Estimation - Pradeep Ravikum
 ar (Carnegie Mellon University)
DTSTART:20180628T150000Z
DTEND:20180628T154500Z
UID:TALK107497@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:A common assumption in the training of machine learning system
 s is that the data is sufficiently clean and well-behaved: there are very 
 few or no outliers\, and the distribution of the data does not have very l
 ong tails. As machine learning finds wider usage\, these assumptions are i
 ncreasingly indefensible. The key question then is how to perform estimati
 on that is robust to departures from these assumptions\; this question has
  been of classical interest\, with seminal contributions due to Box\, Tuke
 y\, Huber\, Hampel\, and several others. Loosely\, there seemed to be a co
 mputation-robustness tradeoff: practical estimators lacked strong robustne
 ss guarantees\, while estimators with strong robustness guarantees were co
 mputationally impractical. In our work\, we provide a new class of computa
 tionally efficient estimators for risk minimization that are provably robu
 st in a variety of settings\, such as arbitrary oblivious contamination an
 d heavy-tailed data\, among others. Our workhorse is a novel robust varian
 t of gradient descent\, and we provide conditions under which our gradient
  descent variant yields accurate and robust estimators for any general con
 vex risk minimization problem. These results provide some of the first com
 putationally tractable and provably robust estimators for general statisti
 cal models. Joint work with Adarsh Prasad\, Arun Sai Suggala\, and Sivaram
 an Balakrishnan.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
