Rethinking Benchmarking in AI
If you have a question about this talk, please contact Marinela Parovic.
The current benchmarking paradigm in AI has many issues: benchmarks saturate quickly, are susceptible to overfitting, contain exploitable annotator artifacts, have unclear or imperfect evaluation metrics, and do not measure what we really care about. I will discuss my work on rethinking the way we do benchmarking in AI, specifically in natural language processing, focusing mostly on the recently launched Dynabench platform.
This talk is part of the Language Technology Lab Seminars series.