
Is my model too complex? Evaluating model formulation using model reduction



There is wide acceptance that models which seek to represent biological or environmental processes should be evaluated before they are applied, and numerous technical methods have evolved to address this requirement. The literature is further supplemented by more philosophical discussions of the role of model evaluation/validation, given that models are (nearly always) known to be approximate at best. Mechanistic models, however detailed, are less detailed than the real systems they seek to describe, so judgements about the appropriate level of detail are made during model development. These judgements are difficult to test; consequently it is easy for models to become over-parameterised, potentially increasing uncertainty in predictions.

Work at Nottingham has sought to address these difficulties. We propose and implement a method that explores a family of simpler (reduced) models obtained by replacing model variables with constants. The procedure iteratively searches the simpler model formulations and compares models in terms of their ability to predict observed data. Under appropriate assumptions the procedure can be implemented within a Bayesian framework, so that the results can be summarised as model probabilities, and as replacement probabilities for individual variables that lend themselves to mechanistic interpretation. This provides powerful diagnostic information to support model development and can identify areas of over-parameterisation, with implications for the interpretation of model results.

The method has been applied to a range of example models. In each case, reduced models are identified that outperform the original full model when compared against observations, suggesting that some over-parameterisation occurred during model development. We argue that the proposed approach is relevant to anyone involved in the development or use of process-based mathematical models, especially those in which understanding is encoded via empirically based relationships.
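The search-and-compare idea can be sketched in a few lines of code. The following is a hypothetical illustration, not the Nottingham implementation: it uses a toy linear model, replaces each candidate variable with a constant (equivalently, drops it once an intercept is present), and uses BIC weights as a crude stand-in for the Bayesian model probabilities described above. All names and the synthetic data are invented for the example.

```python
# Hypothetical sketch: enumerate "reduced" models in which some input
# variables are replaced by constants, score each by BIC, and convert
# BIC differences into approximate model and replacement probabilities.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(0, 10, n)                 # strongly informative input
x2 = rng.uniform(0, 1, n)                  # weakly informative input
y = 2.0 * x1 + 0.1 * x2 + rng.normal(0, 1.0, n)  # toy observations

def fit_bic(inputs, y):
    """Least-squares fit of a linear model with intercept; return BIC."""
    X = np.column_stack(inputs + [np.ones_like(y)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)        # MLE of the noise variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return X.shape[1] * np.log(len(y)) - 2 * loglik

variables = {"x1": x1, "x2": x2}
results = {}
# Iterate over every subset of variables to replace with a constant.
for r in range(len(variables) + 1):
    for replaced in itertools.combinations(variables, r):
        inputs = [v for name, v in variables.items() if name not in replaced]
        results[replaced] = fit_bic(inputs, y)

# BIC differences -> approximate posterior model probabilities.
bics = np.array(list(results.values()))
w = np.exp(-0.5 * (bics - bics.min()))
probs = dict(zip(results, w / w.sum()))

# Replacement probability of a variable: total probability of the
# models in which that variable was held constant.
p_replace = {name: sum(p for m, p in probs.items() if name in m)
             for name in variables}
for name, p in p_replace.items():
    print(f"P(replace {name}) = {p:.3f}")
```

On this toy data the barely-informative input `x2` attracts a high replacement probability while `x1` does not, mirroring the diagnostic use described in the abstract. A full treatment would replace the BIC weights with proper marginal likelihoods.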

This talk is part of the Microsoft Research Cambridge, public talks series.


