Efficient and Structured Uncertainty: Challenges and Opportunities
If you have a question about this talk, please contact Eric T Nalisnick.
Uncertainty estimation is important for ensuring the safety and robustness of AI systems. Ensembles of models yield improvements in predictive performance as well as principled and interpretable uncertainty estimates. However, ensemble-based uncertainty estimation incurs a computational and memory cost that may be prohibitive for many applications, limiting both its practical adoption and the scale of problems examined in research. In this talk we examine pushing the scale-limit of ensemble-based uncertainty estimation in two ways. Firstly, we introduce the task of “Ensemble Distribution Distillation”. Here, the goal is to distill an ensemble into a single model that emulates the ensemble, retaining both its improved predictive performance and its interpretable uncertainty estimates. Secondly, we investigate principled ensemble-based uncertainty estimation for autoregressive structured prediction tasks, such as machine translation and speech recognition, an area which has so far received limited attention. Through the lens of these two scale-limits we pose possible directions for future research.
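As background for the first topic: one published formulation of Ensemble Distribution Distillation trains a student network to output the concentration parameters of a Dirichlet distribution, and maximizes the likelihood of the ensemble members' categorical predictions under that Dirichlet. The sketch below illustrates only that loss computation; the function names and toy numbers are illustrative assumptions, not material from the talk.

```python
import math

def dirichlet_log_pdf(alphas, probs):
    # log density of a categorical distribution `probs` under Dirichlet(alphas)
    a0 = sum(alphas)
    log_beta = sum(math.lgamma(a) for a in alphas) - math.lgamma(a0)
    return sum((a - 1.0) * math.log(p) for a, p in zip(alphas, probs)) - log_beta

def distillation_loss(alphas, ensemble_probs):
    # average negative log-likelihood of each ensemble member's predicted
    # class distribution under the student's Dirichlet output
    nll = -sum(dirichlet_log_pdf(alphas, p) for p in ensemble_probs)
    return nll / len(ensemble_probs)

# Toy example: 3 ensemble members, 3 classes (hypothetical values).
ensemble_probs = [[0.70, 0.20, 0.10],
                  [0.60, 0.30, 0.10],
                  [0.65, 0.25, 0.10]]
alphas = [7.0, 2.5, 1.0]  # hypothetical student output (concentration parameters)
loss = distillation_loss(alphas, ensemble_probs)
```

A Dirichlet whose mean matches the ensemble's average prediction yields a lower loss than a mismatched one, which is what drives the student to emulate the ensemble's spread of predictions rather than only its mean.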
Zoom meeting: https://yandex.zoom.us/j/93618382511?pwd=cHJ0MkhZWXhobzZteG9YTUVJV25iUT09
This talk is part of the Machine Learning @ CUED series.