Sharp bounds for compressive learning
The first part of the talk will derive sharp bounds on the generalisation error of a generic linear classifier trained by empirical risk minimisation on randomly-projected data. We make no restrictive assumptions on the data, such as sparsity, separability, or a particular distributional form. Instead, using elementary techniques, we derive the exact probability of label flipping under Gaussian random projection and use this to bound the effect of random projection on the generalisation error of the compressive classifier.
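The setting can be illustrated with a minimal sketch: project the training data with a Gaussian random matrix and fit a linear classifier in the compressed space. The dataset, the choice of logistic regression as the ERM-trained linear classifier, and the projection dimension k are illustrative assumptions, not the talk's exact setup.

```python
# Sketch of compressive classification: train a linear classifier on
# Gaussian randomly-projected data and compare with the full-dimensional fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, k = 500, 50  # ambient and projected dimensions (illustrative)

X, y = make_classification(n_samples=2000, n_features=d, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Gaussian random projection R: d -> k, entries N(0, 1/k)
R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))

clf_full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
clf_comp = LogisticRegression(max_iter=1000).fit(X_tr @ R, y_tr)

print("full-dimensional test accuracy:", clf_full.score(X_te, y_te))
print("compressive test accuracy:     ", clf_comp.score(X_te @ R, y_te))
```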
The second part of the talk will present an analogous strategy for regression. This provides a new analysis of the excess risk of compressive linear least squares, which removes a spurious log(N) factor from previous bounds (where N is the number of training points). Besides being tighter, the new bounds have a clear interpretation and reveal meaningful structural properties of the learning problem that make it effectively solvable in a low-dimensional random subspace.
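For the regression part, a similarly minimal sketch fits ordinary least squares on Gaussian randomly-projected features and evaluates it on fresh data. The synthetic linear data model, noise level, and dimensions are illustrative assumptions only.

```python
# Sketch of compressive linear least squares: OLS in a k-dimensional
# Gaussian random subspace of the d-dimensional feature space.
import numpy as np

rng = np.random.default_rng(1)
N, d, k = 1000, 300, 40  # training points, ambient dim, projected dim

w_true = rng.normal(size=d)
X = rng.normal(size=(N, d))
y = X @ w_true + 0.1 * rng.normal(size=N)

R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))  # Gaussian projection
Z = X @ R                                            # compressed design matrix

# Least-squares fit in the random subspace
w_comp, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Out-of-sample error on fresh data from the same model
X_test = rng.normal(size=(5000, d))
y_test = X_test @ w_true + 0.1 * rng.normal(size=5000)
mse = np.mean((X_test @ R @ w_comp - y_test) ** 2)
print("compressive least-squares test MSE:", mse)
```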
This talk is part of the Statistics series.