Sparsity: Beyond L1
If you have a question about this talk, please contact Colorado Reed.
The L1 norm is a popular choice of regularizer when learning a mapping from inputs to outputs under a frequentist framework. This norm has the useful property of being convex and promoting sparsity. Sparse solutions are desirable when you believe that only a subset of the input features are required to generate an output, for example when the input dimension is much larger than the number of data points. A significant body of research is dedicated to studying when such regularizers work and where they do not. These ideas can be extended to group or structured sparsity, which can be applied to sparse multiple kernel learning and non-linear variable selection. Finally, we shall discuss sparse matrix factorization methods developed by generalizing the L1 norm in particular ways, including sparse PCA and dictionary learning.
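As a rough illustration of the sparsity-promoting effect described above (not part of the talk itself), the sketch below fits an L1-regularized linear model to synthetic data in which only a handful of features actually influence the output; the use of scikit-learn's Lasso and all of the data here are assumptions made for demonstration.

```python
# A minimal sketch (illustrative, not from the talk) of L1-regularized
# regression promoting sparsity. The data are synthetic: only 5 of 100
# features actually influence the output, and there are fewer data
# points than input dimensions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, k = 50, 100, 5                  # n data points, d features, k relevant
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = rng.standard_normal(k)   # only the first k features matter
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Lasso minimizes (1/(2n)) * ||y - Xw||_2^2 + alpha * ||w||_1,
# so larger alpha drives more coefficients exactly to zero.
model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.count_nonzero(model.coef_))
```

With a suitable choice of alpha, most of the 100 fitted coefficients are exactly zero, recovering something close to the k truly relevant features; this is the convex, sparsity-inducing behavior of the L1 norm that the talk takes as its starting point.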
This talk is part of the Machine Learning Reading Group @ CUED series.