The Minimum Description Length Principle and Machine Learning
If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

The Minimum Description Length (MDL) principle states that good learning can be achieved by selecting the model that provides the shortest description of the observed data. It is a key concept that bridges information theory and machine learning, enabling us to understand increasingly important machine learning problems from an information-theoretic viewpoint.

In this talk, we first review methods for efficient lossless compression of data generated from an unknown probability distribution (universal coding), with a particular focus on two-stage (two-part) coding. We then introduce the MDL estimator based on two-stage codes and explain how it relates to standard learning formulations. Finally, we present a theorem by Barron and Cover that provides a generalization guarantee for this MDL estimator, thereby offering a rigorous mathematical justification for applying the MDL principle in machine learning.

This talk is part of the Information Theory Seminar series.
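To make the two-part coding idea concrete, here is a minimal sketch of MDL model selection for polynomial regression. The total code length is L(model) + L(data | model): the data term is the Gaussian negative log-likelihood of the residuals, and the model term uses the common (k/2) log n parameter-cost approximation. This illustrative example is not from the talk itself; the penalty form and the Gaussian noise model are assumptions of this sketch.

```python
import numpy as np

def two_part_mdl(x, y, max_degree=6):
    """Select a polynomial degree by minimizing a two-part code length
    L(model) + L(data | model), measured in nats.

    Illustrative sketch: the (k/2) log n model cost is one standard
    approximation, not the only way to encode the parameters."""
    n = len(x)
    best_deg, best_len = None, np.inf
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        resid = y - np.polyval(coeffs, x)
        sigma2 = max(np.mean(resid ** 2), 1e-12)  # guard against log(0)
        # L(data | model): Gaussian negative log-likelihood at the MLE
        data_len = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        # L(model): (k/2) log n for deg+1 coefficients plus the noise variance
        model_len = 0.5 * (deg + 2) * np.log(n)
        total = data_len + model_len
        if total < best_len:
            best_deg, best_len = deg, total
    return best_deg, best_len

x = np.linspace(-3, 3, 50)
y = 1.0 + 2.0 * x - 0.5 * x ** 2  # data generated by a quadratic
deg, _ = two_part_mdl(x, y)
print(deg)  # the shortest two-part code is achieved at degree 2
```

Degrees above 2 cannot shorten the data part here but pay a larger model cost, so the two-part code length picks out the true degree, which is the behavior the MDL estimator formalizes.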