
Compressible priors for high-dimensional statistics


If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

We develop a principled way of identifying probability distributions whose independent and identically distributed (iid) realizations are compressible, i.e., can be approximated as sparse. We focus on the context of Gaussian random underdetermined linear regression (GULR) problems, where compressibility is known to ensure the success of estimators exploiting sparse regularization. We prove that many of the conventional priors associated with probabilistic interpretations of p-norm (p<=1) regularization algorithms are in fact incompressible in the limit of large problem sizes. To show this, we identify nontrivial undersampling regions in GULR where the simple least squares solution almost surely outperforms an oracle sparse solution when the data is generated from a prior such as the Laplace distribution. We provide rules of thumb to characterize large families of compressible and incompressible priors based on their second and fourth moments. Generalized Gaussians and generalized Pareto distributions serve as running examples for concreteness. We conclude with a study of the statistics of wavelet coefficients of natural images in the context of compressible priors.
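The notion of compressibility used here can be made concrete with a small numerical sketch. The snippet below is not from the talk; it is a minimal illustration, under assumed parameter choices, of the relative error of the best k-term (sparse) approximation of an iid sample. A light-tailed prior such as the Laplace distribution retains substantial energy outside its largest coefficients, while a heavy-tailed generalized Pareto sample (here generated by inverse-CDF sampling with a hypothetical shape parameter xi = 1) concentrates most of its energy in a few large entries:

```python
import numpy as np

def rel_kterm_error(x, k):
    """Relative L2 error of the best k-term approximation of x:
    keep the k largest-magnitude entries, zero the rest."""
    mags = np.sort(np.abs(x))[::-1]
    tail_energy = np.sum(mags[k:] ** 2)
    total_energy = np.sum(mags ** 2)
    return np.sqrt(tail_energy / total_energy)

rng = np.random.default_rng(0)
N = 100_000
k = N // 10  # keep the largest 10% of coefficients

# Laplace sample: incompressible in the sense of the talk.
laplace = rng.laplace(size=N)

# Symmetrized generalized Pareto sample via inverse-CDF sampling,
# shape xi = 1 (a hypothetical heavy-tailed choice).
xi = 1.0
u = rng.uniform(size=N)
gpd = (u ** (-xi) - 1.0) / xi
gpd *= rng.choice([-1.0, 1.0], size=N)

err_laplace = rel_kterm_error(laplace, k)
err_gpd = rel_kterm_error(gpd, k)
print(f"Laplace relative error:            {err_laplace:.3f}")
print(f"Generalized Pareto relative error: {err_gpd:.3f}")
```

With these choices the Laplace error stays bounded well away from zero as N grows, whereas the heavy-tailed sample is well approximated by its largest entries, matching the compressible/incompressible dichotomy described in the abstract.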

This talk is part of the Signal Processing and Communications Lab Seminars series.

