
Weighted evaluation of probabilistic forecasts


RCLW02 - Calibrating prediction uncertainty: statistics and machine learning perspectives

The evaluation of probabilistic forecasts focuses on two aspects of forecast performance: forecast accuracy and forecast calibration. Forecast accuracy refers to how ‘close’ the forecast is to the corresponding observation, which can be quantified using proper scoring rules, while forecast calibration considers to what extent probabilistic forecasts are trustworthy. Most scoring rules and checks for calibration treat all possible outcomes equally. However, certain outcomes are often of more interest than others, and these outcomes should therefore be emphasised during forecast evaluation. For example, extreme outcomes typically lead to the largest impacts on forecast users, making accurate and calibrated forecasts for these outcomes particularly valuable. In this talk, we discuss methods to focus on particular outcomes when evaluating probabilistic forecasts. We review weighted scoring rules, which allow practitioners to incorporate a weight function into conventional scoring rules when calculating forecast accuracy, and demonstrate that the theory underlying weighted scoring rules can readily be extended to checks for forecast calibration. Just as proper scores can be decomposed to obtain a measure of forecast miscalibration, weighted scores can be decomposed to yield a measure of weighted forecast calibration.
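For concreteness, one widely used weighted scoring rule is the threshold-weighted CRPS, defined as twCRPS(F, y) = ∫ w(z) (F(z) − 1{y ≤ z})² dz, where w is a non-negative weight function on the outcome space. The sketch below (an illustration, not material from the talk) approximates this score for an ensemble forecast by numerical integration on a grid, using an indicator weight that emphasises outcomes above a chosen threshold; the function name and grid are illustrative choices.

import numpy as np

def tw_crps(ensemble, obs, weight, grid):
    """Threshold-weighted CRPS of an ensemble forecast, approximated on a grid.

    ensemble : 1-D array of ensemble members
    obs      : observed value (scalar)
    weight   : function mapping thresholds to non-negative weights
    grid     : 1-D array of thresholds covering the relevant outcome range
    """
    ensemble = np.asarray(ensemble, dtype=float)
    # Empirical forecast CDF and the observation's step function on the grid
    F = np.mean(ensemble[:, None] <= grid[None, :], axis=0)
    H = (obs <= grid).astype(float)
    # Weighted squared difference, integrated over thresholds (trapezoidal rule)
    integrand = weight(grid) * (F - H) ** 2
    return np.trapz(integrand, grid)

# Example: emphasise outcomes above the (illustrative) threshold t = 1.5
rng = np.random.default_rng(0)
ens = rng.normal(size=50)
score = tw_crps(ens, obs=2.0,
                weight=lambda z: (z >= 1.5).astype(float),
                grid=np.linspace(-5.0, 5.0, 1001))
print(score)

With the constant weight w(z) = 1 this reduces to the ordinary CRPS, so the weight function controls how strongly particular outcomes, such as extremes, influence the evaluation.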

This talk is part of the Isaac Newton Institute Seminar Series.
