
Reduced plug-in rules for learning


If you have a question about this talk, please contact Minor Gordon.

The performance obtained by regularised risk minimisation-based algorithms has made them standard tools for many learning tasks. However, such algorithms can be difficult to apply in specific situations, e.g. when the experimental setup involves large datasets. We study alternatives to the risk minimisation framework that are based on simple yet effective algorithmic designs. The focus of this talk will mainly be on statistical classification. As an example, a methodology is derived to discriminate between two classes known through a set of data distributed according to different probability density functions. A decision rule is built as the plug-in of a kernel rule defined on a small subset of the learning set. This framework allows for fast yet accurate estimates of the optimal classification rule. Convergence sanity checks, as well as initial simulation results, will be presented. Extensions to other approaches will also be discussed.
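The general idea described above can be sketched as follows: estimate each class-conditional density with a kernel estimator built on a small random subset of the learning set, then plug these estimates into the Bayes rule. This is only an illustrative sketch of a generic plug-in kernel rule, not the speaker's exact method; the function names, the Gaussian kernel, and all parameters (subset size, bandwidth) are assumptions.

```python
import numpy as np

def kernel_plugin_classifier(X_train, y_train, subset_size=40, bandwidth=0.5, seed=0):
    """Plug-in classification rule from Gaussian kernel density estimates
    computed on a small random subset of the learning set.
    (Illustrative sketch; parameters and design are assumptions.)"""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_train), size=min(subset_size, len(X_train)), replace=False)
    Xs, ys = X_train[idx], y_train[idx]
    classes = np.unique(y_train)
    # Class priors are estimated from the full learning set (cheap to compute).
    priors = {c: np.mean(y_train == c) for c in classes}

    def predict(X):
        X = np.atleast_2d(X)
        scores = []
        for c in classes:
            Xc = Xs[ys == c]
            # Gaussian kernel density estimate using only the subset points of class c.
            d2 = ((X[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
            dens = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
            # Plug-in Bayes rule: pick the class maximising prior * density estimate.
            scores.append(priors[c] * dens)
        return classes[np.argmax(np.stack(scores), axis=0)]

    return predict

# Toy example: two well-separated Gaussian classes in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, size=(200, 2)), rng.normal(+2, 1, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)
predict = kernel_plugin_classifier(X, y, subset_size=40)
acc = np.mean(predict(X) == y)
```

Because the kernel estimate only sums over the subset, prediction cost scales with the subset size rather than the full learning set, which is the source of the speed-up the abstract alludes to.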

This talk is part of the Computer Laboratory Opera Group Seminars series.

