BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:The algorithmic infinite monkey theorem - Ard Louis (Department of
  Physics\, University of Oxford)
DTSTART:20240604T120000Z
DTEND:20240604T130000Z
UID:TALK215338@talks.cam.ac.uk
CONTACT:Sarah Loos
DESCRIPTION:The classic infinite monkey theorem\, often expressed in
  terms of monkeys on typewriters\, is a popular trope in science and
  in popular culture. It predicts that the probability of an output
  sequence drops exponentially with its length. While many systems in
  science and engineering can be abstracted as input-output maps\, the
  classic infinite monkey theorem doesn't apply because the inputs are
  processed before outputs are produced. A better intuition may come
  from the algorithmic monkey theorem\, where monkeys type at random in
  a computer language. Then the probability P(X) that the monkeys
  produce output X is not determined by the length of X\, but rather by
  the length of the shortest program that can generate X. This simple
  picture is formalised in the coding theorem of algorithmic
  information theory (AIT)\, where the length of the shortest program
  is the Kolmogorov complexity. The AIT coding theorem can be applied
  to the self-assembly of finite-sized complexes\, where it predicts
  that symmetric structures are much more likely to appear upon random
  design of interacting particles. Similarly\, for RNA folding\, it
  predicts that secondary structures that are highly compressible are
  much more likely to appear upon random mutations. In evolution\, this
  picture from AIT predicts that phenotypic variation is strongly
  biased towards simple phenotypes\, which can be illustrated with
  Richard Dawkins's biomorphs model of development. This non-adaptive
  argument may capture the main reason why simple leaves are much more
  frequent than complex leaves in phylogenies of angiosperms. In
  machine learning\, this picture predicts an exponential bias towards
  simple functions\, which can help explain why heavily
  overparameterized neural networks don't overfit.
LOCATION:Centre for Mathematical Sciences\, Lecture room MR4
END:VEVENT
END:VCALENDAR
