# Rewarding strength, discounting weakness: combining information from multiple climate simulators

Mathematical and Statistical Approaches to Climate Modelling and Prediction

Although modern climate simulators represent our best available understanding of the climate system, their projections can vary appreciably. Users of climate projections are therefore increasingly advised to consider information from an “ensemble” of different simulators, or “multimodel ensemble” (MME).

When analysing an MME, the simplest approach is to average each quantity of interest over all simulators, possibly weighting each simulator according to some measure of “quality”. This approach has two drawbacks. Firstly, it is heuristic: results can differ between weighting schemes, leaving users little better off than before. Secondly, no simulator is uniformly better than all others: if projections of several different quantities are required, the rankings of the simulators (and hence the implied weights) may differ considerably between quantities of interest.
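The simple weighted-average approach can be sketched as follows. This is an illustration only: the projections and the inverse-RMSE “quality” weights are invented, and inverse RMSE against historical observations is just one of many possible weighting schemes.

```python
import numpy as np

# Hypothetical projections of one quantity from four simulators
# (e.g. regional warming in K by 2100; values invented for illustration).
projections = np.array([2.1, 2.8, 3.4, 2.5])

# Equal weighting: the simple multimodel mean.
equal_mean = projections.mean()

# "Quality" weighting, here inverse RMSE against historical observations
# (RMSE values also invented). Other schemes would give other weights.
rmse = np.array([0.30, 0.55, 0.40, 0.25])
weights = (1.0 / rmse) / (1.0 / rmse).sum()   # normalise to sum to 1
weighted_mean = weights @ projections

print(equal_mean, weighted_mean)
```

Because the weights depend on the chosen skill measure and on the quantity being assessed, a different scheme (or a different quantity of interest) can reorder the simulators and shift the combined estimate, which is exactly the heuristic drawback described above.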

A more sophisticated approach is to use modern statistical techniques to derive probability density functions (pdfs) for the quantities of interest. However, no systematic attempt has yet been made to sample the range of possible modelling decisions in building an MME; it is therefore not clear to what extent the resulting “probabilities” are in any way relevant to the downstream user.

This talk presents a statistical framework that addresses all of these issues, building on Leith and Chandler (2010). The emphasis is on conceptual aspects, although the framework has been applied in practice elsewhere. A mathematical analysis of the framework shows that:

(a) Information from individual simulators is automatically weighted, alongside that from historical observations and from prior knowledge.

(b) The weights reflect the relative value of different information sources for each quantity of interest. Thus each simulator is rewarded for its strengths, whereas its weaknesses are discounted.

(c) The weights for an individual simulator depend on its internal variability, its expected consensus with other simulators, the internal variability of the real climate, and the propensity of simulators collectively to deviate from the real climate.

(d) Some subjective judgements are inevitable.
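The flavour of automatic weighting can be conveyed with a deliberately simplified Gaussian sketch (this is not the full hierarchical model of Leith and Chandler, 2010): suppose each information source — prior knowledge, observations, and each simulator — supplies an estimate of a quantity together with a variance, and the combined estimate is the precision-weighted mean. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical sources: prior knowledge, observations, three simulators.
estimates = np.array([3.0, 2.6, 2.2, 3.1, 2.8])

# Variance attached to each source. For a simulator this would combine its
# internal variability with the variance of the simulators' collective
# discrepancy from the real climate (values invented for illustration).
variances = np.array([1.00, 0.20, 0.35, 0.50, 0.30])

precisions = 1.0 / variances
weights = precisions / precisions.sum()   # automatic, variance-driven weights
combined = weights @ estimates            # precision-weighted point estimate
combined_var = 1.0 / precisions.sum()     # variance of the combined estimate

print(weights.round(3), combined.round(3))
```

Since the variances are specific to each quantity of interest, a simulator that is precise for one quantity but noisy for another receives a different weight for each: its strengths are rewarded and its weaknesses discounted, as in point (b) above.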

Reference: Leith, N.A. and Chandler, R.E. (2010). A framework for interpreting climate model outputs. J. R. Statist. Soc. C, 59(2): 279–296.

This talk is part of the Isaac Newton Institute Seminar Series.