BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Frontiers in Artificial Intelligence Series
SUMMARY:How good is your classifier? Revisiting the role o
 f evaluation metrics in machine learning - Sanmi K
 oyejo\, University of Illinois
DTSTART;TZID=Europe/London:20190731T110000
DTEND;TZID=Europe/London:20190731T120000
UID:TALK128293AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/128293
DESCRIPTION:With the increasing integration of machine learnin
 g into real systems\, it is crucial that trained m
 odels are optimized to reflect real-world tradeoff
 s. Increasing interest in proper evaluation has le
 d to a wide variety of metrics employed in practic
 e\, often specially designed by experts. However\,
  modern training strategies have not kept up with 
 the explosion of metrics\, leaving practitioners t
 o resort to heuristics.\nTo address this shortcomi
 ng\, I will present a simple\, yet consistent post
 -processing rule which improves the performance of
  trained binary\, multilabel\, and multioutput cla
 ssifiers. Building on these results\, I will propo
 se a framework for metric elicitation\, which addr
 esses the broader question of how one might select
  an evaluation metric for real-world problems so t
 hat it reflects true preferences.\n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station R
 oad\, Cambridge\, CB1 2FB
CONTACT:Microsoft Research Cambridge Talks Admins
END:VEVENT
END:VCALENDAR
