Efficient sparse recovery with no assumption on the dictionary

If you have a question about this talk, please contact rbg24.

Methods of sparse statistical estimation are mainly of two types. Some, like the BIC, enjoy nice theoretical properties without any assumption on the dictionary, but are computationally infeasible starting from relatively modest dimensions p. Others, like the Lasso or the Dantzig selector, are easily realizable for very large p, but their theoretical performance is conditioned by severe restrictions on the dictionary. The aim of this talk is to propose a new method of sparse recovery in regression, density and classification models that realizes a compromise between theoretical properties and computational efficiency. The theoretical performance of the method is comparable with that of the BIC in terms of sparsity oracle inequalities for the prediction risk. No assumption on the dictionary is required, apart from the standard normalization. At the same time, the method is computationally feasible for relatively large dimensions p. It is constructed using exponential weighting with suitably chosen priors, and its analysis is based on PAC-Bayesian ideas from statistical learning. In particular, we obtain new PAC-Bayesian bounds with leading constant 1, and we develop a general technique to derive sparsity oracle inequalities from PAC-Bayesian bounds. This is joint work with Arnak Dalalyan.
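To give a flavour of the exponential-weighting idea, here is a minimal sketch in NumPy. It is not the talk's actual estimator (which aggregates over sparse coefficient vectors with a carefully chosen prior): as a simplification, it aggregates the p single-column least-squares fits of the dictionary, weighting each by its exponentiated empirical risk times a heavy-tailed, sparsity-favouring prior. The names `ewa_predict`, the temperature `beta`, and the prior scale `tau` are all illustrative choices, not from the talk.

```python
import numpy as np

def ewa_predict(X, y, beta=2.0, tau=1e-2):
    """Toy exponentially weighted aggregation over the p single-column
    predictors of a dictionary X (n x p). Returns a prediction function
    and the aggregation weights. A sketch only -- the actual method in
    the talk aggregates over sparse coefficient vectors."""
    n, p = X.shape
    # Least-squares fit of each dictionary column alone, and its empirical risk.
    norms = (X ** 2).sum(axis=0)
    coefs = X.T @ y / np.maximum(norms, 1e-12)
    risks = ((y[:, None] - X * coefs) ** 2).mean(axis=0)
    # Heavy-tailed prior putting most mass near zero coefficients
    # (in the spirit of sparsity-favouring priors; exact form is illustrative).
    log_prior = -np.log1p((coefs / tau) ** 2)
    # Exponential weights: posterior ~ prior * exp(-n * risk / beta).
    log_w = -n * risks / beta + log_prior
    log_w -= log_w.max()          # stabilise before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    # Aggregated predictor: convex combination of the individual fits.
    return (lambda Xnew: Xnew @ (w * coefs)), w
```

On data generated from a single dictionary column, the weights concentrate on that column, so the aggregate behaves like a sparse estimator even though no combinatorial search over subsets (as in the BIC) is performed.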

This talk is part of the Statistics series.

