
Context sensitive information: Which bits matter in data?


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image/voice recorded, please consider this before attending.

Learning patterns in data requires extracting interesting, statistically significant regularities from (large) data sets, e.g. detecting cancer cells in tissue microarrays and estimating their staining, or role mining in security permission management. Admissible solutions or hypotheses specify the context of pattern analysis problems, which have to cope with model mismatch and noise in the data. An information theoretic approach is developed that estimates the precision of inferred solution sets and regularizes solutions in a noise-adapted way. The tradeoff between “informativeness” and “robustness” is mirrored by the balance between high information content and identifiability of solution sets, thereby giving rise to a new notion of context sensitive information. Cost functions that rank solutions and, more abstractly, algorithms are considered as noisy channels with a generalization capacity. The effectiveness of this concept is demonstrated by model validation for spectral clustering based on different variants of graph cuts. The concept also enables us to measure how many bits are extracted by sorting algorithms when the input, and thereby the pairwise comparisons, are subject to fluctuations.
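As a back-of-the-envelope illustration of the closing remark (not part of the abstract and not the speaker's construction), one can treat each noisy pairwise comparison as a binary symmetric channel: with flip probability eps it conveys at most 1 - H(eps) bits, so m comparisons can extract at most m * (1 - H(eps)) of the log2(n!) bits needed to identify the true ordering. The short Python sketch below computes this bound; the function names and the parameter eps are chosen here purely for exposition.

# Illustrative sketch only: bounds the bits a comparison sort can extract
# when each pairwise comparison is flipped with probability eps.
import math

def binary_entropy(eps: float) -> float:
    """Entropy H(eps) in bits of a Bernoulli(eps) variable."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)

def bits_needed(n: int) -> float:
    """Bits required to single out one of the n! orderings: log2(n!)."""
    return math.lgamma(n + 1) / math.log(2)

def bits_extracted(m: int, eps: float) -> float:
    """Upper bound on bits extracted by m comparisons, each flipped w.p. eps
    (each comparison is a binary symmetric channel of capacity 1 - H(eps))."""
    return m * (1 - binary_entropy(eps))

if __name__ == "__main__":
    n = 64                      # items to sort
    m = int(n * math.log2(n))   # roughly the comparisons of an n log n sort
    for eps in (0.0, 0.05, 0.2, 0.4):
        print(f"eps={eps:.2f}: need {bits_needed(n):6.1f} bits, "
              f"{m} comparisons extract at most {bits_extracted(m, eps):6.1f} bits")

For eps = 0 the bound reduces to the classical log2(n!) comparison lower bound; as eps approaches 1/2, each comparison carries vanishing information and the output ordering retains correspondingly fewer bits about the true one.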

This talk is part of the Microsoft Research Machine Learning and Perception Seminars series.
