
Information-based methods in dynamic learning


If you have a question about this talk, please contact Mustapha Amrani.

Design and Analysis of Experiments

The history of information/entropy in learning due to Blackwell, Rényi, Lindley and others is sketched. Using results of DeGroot, with new proofs, we arrive at a general class of information functions which gives "expected" learning in the Bayes sense. It is shown how this is intimately connected with the theory of majorization: learning means a more peaked distribution in a majorization sense. Counter-examples show that in some real situations it is possible to un-learn, in the sense of having a less peaked posterior than prior. This does not happen in the standard Gaussian case, but it does in cases such as the Beta-mixed binomial. Applications are made to experimental design. For designs for non-linear and dynamic systems, an idea of "local learning" is defined, in which the above theory is applied locally. Some connections with ideas of "active learning" in machine learning are also drawn.
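The un-learning phenomenon mentioned above can be illustrated with a minimal sketch (not taken from the talk): a conjugate Beta-binomial update in which a single surprising observation leaves the posterior *less* peaked than the prior. Variance is used here as a simple stand-in for peakedness; the talk's majorization ordering is a stronger notion, and the specific numbers below are illustrative assumptions.

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution: ab / ((a+b)^2 (a+b+1))."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# A confident prior: Beta(5, 1), with mean 5/6.
prior_var = beta_var(5, 1)

# Observe a single surprising failure (x = 0 successes in n = 1 trial).
# The conjugate update gives posterior Beta(5 + 0, 1 + 1) = Beta(5, 2).
post_var = beta_var(5, 2)

print(f"prior variance     = {prior_var:.4f}")   # ≈ 0.0198
print(f"posterior variance = {post_var:.4f}")    # ≈ 0.0255 (less peaked!)
```

In the Gaussian case with known observation variance, the posterior variance shrinks deterministically with every observation, so this cannot happen there, consistent with the contrast drawn in the abstract.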

This talk is part of the Isaac Newton Institute Seminar Series.



© 2006-2021, University of Cambridge.