Rethinking evaluation for machine learning models
If you have a question about this talk, please contact Elre Oldewage.
Machine learning research combines theoretical and empirical approaches. The recent explosion of new methods, and the shift toward empirical rather than theoretically driven research, demands even greater care in evaluating these methods so that they can be compared fairly. In this discussion session, we aim to highlight and discuss some of the best practices and common pitfalls of evaluating machine learning models. Because machine learning is a relatively young field compared to others, establishing best practices requires careful dialogue rather than a set of prebaked ideas. The aim of this reading session is to enable a more careful and thorough discussion of these topics within CBL.
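As an illustration (not from the talk itself): one common pitfall in empirical comparisons is reporting a single training run, whose result can vary considerably with the random seed. Below is a minimal Python sketch, assuming scikit-learn and a hypothetical helper evaluate_across_seeds, of the practice of repeating an experiment over several seeds and reporting mean and standard deviation instead.

    # Illustrative sketch only: re-run an experiment over several random
    # seeds and report mean +/- std, rather than a single (possibly lucky) run.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def evaluate_across_seeds(seeds):
        """Re-seed both the data split and the model for each run (hypothetical helper)."""
        scores = []
        for seed in seeds:
            # Fixed synthetic dataset; only the split and model vary with the seed.
            X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.2, random_state=seed
            )
            model = RandomForestClassifier(n_estimators=100, random_state=seed)
            model.fit(X_tr, y_tr)
            scores.append(accuracy_score(y_te, model.predict(X_te)))
        return np.mean(scores), np.std(scores)

    mean_acc, std_acc = evaluate_across_seeds(seeds=range(5))
    print(f"accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")

Reporting the spread across seeds makes it clearer whether a difference between two methods exceeds run-to-run variability, which is one of the fairness concerns the abstract alludes to.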
This talk is part of the Machine Learning Reading Group @ CUED series.