The algorithmic infinite monkey theorem
If you have a question about this talk, please contact Sarah Loos.

The classic infinite monkey theorem, often expressed in terms of monkeys on typewriters, is a popular trope in science and in popular culture. It predicts that the probability of an output sequence drops exponentially with its length. While many systems in science and engineering can be abstracted as input-output maps, the classic infinite monkey theorem doesn't apply to them, because the inputs are processed before outputs are produced. A better intuition may come from the algorithmic monkey theorem, where monkeys type at random in a computer language. Then the probability P(X) that the monkeys produce output X is determined not by the length of X, but by the length of the shortest program that can generate X. This simple picture is formalised in the coding theorem of algorithmic information theory (AIT), where the length of the shortest program is the Kolmogorov complexity K(X), so that P(X) scales roughly as 2^(-K(X)).

The AIT coding theorem can be applied to the self-assembly of finite-sized complexes, where it predicts that symmetric structures are much more likely to appear upon random design of interacting particles. Similarly, for RNA folding, it predicts that highly compressible secondary structures are much more likely to appear upon random mutations. In evolution this picture from AIT predicts that phenotypic variation is strongly biased towards simple phenotypes, which can be illustrated with Richard Dawkins's biomorphs model of development. This non-adaptive argument may capture the main reason why simple leaves are much more frequent than complex leaves in phylogenies of angiosperms. In machine learning this picture predicts an exponential bias towards simple functions, which can help explain why heavily overparameterized neural networks don't overfit.

This talk is part of the DAMTP Statistical Physics and Soft Matter Seminar series.
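
As a rough illustration of the contrast described above (not drawn from the talk itself), the following Python sketch compares the two pictures, using zlib-compressed length in bits as a crude, computable upper-bound proxy for the uncomputable Kolmogorov complexity K(X). Under the classic theorem the two example strings are equally improbable, since they have the same length, while the coding-theorem estimate strongly favours the compressible one.

    # Illustrative sketch only: compare the classic monkey-theorem probability,
    # which depends only on output length, with a coding-theorem style estimate
    # P(X) ~ 2^(-K(X)), using compressed length as a stand-in for K(X).
    import math
    import random
    import zlib

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def classic_log2_prob(output: str) -> float:
        """log2 P(X) for uniform random typing: -|X| * log2(alphabet size)."""
        return -len(output) * math.log2(len(ALPHABET))

    def coding_theorem_log2_estimate(output: str) -> float:
        """log2 of the estimate 2^(-K(X)), with zlib-compressed length in bits
        used as a crude upper bound on the Kolmogorov complexity K(X)."""
        return -8 * len(zlib.compress(output.encode()))

    random.seed(0)
    simple_x = "ab" * 50                                             # highly compressible
    random_x = "".join(random.choice(ALPHABET) for _ in range(100))  # essentially incompressible

    for name, x in [("simple", simple_x), ("random", random_x)]:
        print(f"{name:>6}: classic log2 P = {classic_log2_prob(x):7.1f}, "
              f"coding-theorem log2 P ~ {coding_theorem_log2_estimate(x):7.1f}")

Because zlib adds fixed overhead, the compressed length overestimates K(X) for short strings, so only the relative comparison between the two outputs is meaningful here.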