Information Theory and Method of Types: Channels, Quantizers, and Divergences
If you have a question about this talk, please contact Konstantina Palla.
Information Theory and Machine Learning share many concepts, models, and
inference methods, and in some cases offer complementary perspectives on the
same problem. In this RCC we consider one such case: quantization (IT) or,
equivalently, feature extraction (ML) for classifier design.
We will start by reviewing Shannon's original work on noisy channel coding
and some recent results on this topic. We will continue with its dual problem,
rate-distortion theory, and its practical realisation, the design of quantizers.
We will consider the use of different divergences for quantizer design, and we
will end by analysing their relationship with the loss function used to learn
the classifier.
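As background, the standard quantities underlying these topics are Shannon's channel capacity, the rate-distortion function, and (as one representative divergence) the Kullback-Leibler divergence; written out, these are:

    C = \max_{p(x)} I(X;Y)

    R(D) = \min_{p(\hat{x} \mid x) \,:\, \mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})

    D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}

The duality mentioned above is between maximising mutual information over input distributions (capacity) and minimising it subject to a distortion constraint (rate-distortion).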
This talk is part of the Machine Learning Reading Group @ CUED series.