Robust Deep Learning Under Distribution Shift
If you have a question about this talk, please contact Adrian Weller.

We might hope that when faced with unexpected inputs, well-designed software systems would fire off warnings. However, ML systems, which depend strongly on properties of their inputs (e.g. the i.i.d. assumption), tend to fail silently. Faced with distribution shift, we wish (i) to detect the shift, (ii) to quantify it, and (iii) to correct our classifiers on the fly, when possible. This talk describes a line of recent work on tackling distribution shift. First, I will focus on label shift, a more classic problem where strong assumptions enable principled methods. Then I will discuss how recent tools from generative adversarial networks have been appropriated (and misappropriated) to tackle dataset shift, characterizing and (partially) repairing a foundational flaw in the method. Finally, I will discuss new work that leverages human-in-the-loop feedback to develop classifiers that account for causal structure in text classification problems and appear (empirically) to benefit on a battery of out-of-domain evaluations.
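As context for the label shift portion of the talk: one well-known principled approach in this setting estimates importance weights from a black-box classifier's confusion matrix. Below is a minimal sketch, assuming hard-label predictions supplied as integer arrays; the function name and interface are illustrative, and this is not necessarily the exact method presented in the talk.

```python
import numpy as np

def estimate_label_shift_weights(preds_source, y_source, preds_target, n_classes):
    """Estimate w[k] = p_target(y=k) / p_source(y=k) under label shift,
    i.e. p(x|y) is fixed while p(y) changes, using only hard predictions
    from an already-trained black-box classifier."""
    # Confusion matrix on held-out labeled source data:
    # C[i, j] = P_source(prediction = i, true label = j)
    C = np.zeros((n_classes, n_classes))
    for p, y in zip(preds_source, y_source):
        C[p, y] += 1.0
    C /= len(y_source)

    # Distribution of predictions on unlabeled target data:
    # mu[i] = P_target(prediction = i)
    mu = np.bincount(preds_target, minlength=n_classes) / len(preds_target)

    # Under the label shift assumption, C @ w = mu. Solving requires C to be
    # invertible (roughly, the classifier must be better than chance); clip
    # small negative entries that arise from finite-sample noise.
    w = np.linalg.solve(C, mu)
    return np.clip(w, 0.0, None)
```

The recovered weights can then be used to reweight the source training loss and retrain, correcting the classifier for the shifted label distribution.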
This talk is part of the Machine Learning @ CUED series.