
Optimal and efficient learning with random features


  • Lorenzo Rosasco (Massachusetts Institute of Technology; Istituto Italiano di Tecnologia (IIT))
  • Wednesday 17 January 2018, 09:45-10:30
  • Seminar Room 1, Newton Institute.

If you have a question about this talk, please contact INI IT.

STSW01 - Theoretical and algorithmic underpinnings of Big Data

Random features approaches correspond to one-hidden-layer neural networks with random hidden units, and can be seen as approximate kernel methods. We study the statistical and computational properties of random features within a ridge regression scheme. We prove for the first time that a number of random features much smaller than the number of data points suffices for optimal statistical error, with a corresponding huge computational gain. We further analyze faster rates under refined conditions and the potential benefit of random features chosen according to adaptive sampling schemes.
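The scheme described in the abstract can be illustrated with a minimal sketch (not the speaker's implementation; all parameter values below are assumptions for the toy example): random Fourier features approximate a Gaussian kernel, and ridge regression is then solved in the m-dimensional feature space with m much smaller than the number of data points n, reducing the cost from O(n^3) to roughly O(n m^2 + m^3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: n data points, d input dimensions, m << n random features (assumed values)
n, d, m = 1000, 5, 50
lam, sigma = 1e-3, 1.0  # ridge parameter and kernel bandwidth (assumed)

X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)  # synthetic regression target

# Random Fourier feature map: phi(x) = sqrt(2/m) * cos(W^T x + b),
# which approximates a Gaussian kernel of bandwidth sigma
W = rng.standard_normal((d, m)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, m)
Phi = np.sqrt(2.0 / m) * np.cos(X @ W + b)  # n x m feature matrix

# Ridge regression in feature space: solve an m x m system instead of n x n
alpha = np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(m), Phi.T @ y)

def predict(X_new):
    """Predict with the learned random-features ridge model."""
    return np.sqrt(2.0 / m) * np.cos(X_new @ W + b) @ alpha

train_mse = np.mean((Phi @ alpha - y) ** 2)
print(f"train MSE with m={m} features: {train_mse:.3f}")
```

The point of the talk is that, under suitable conditions, choosing m this much smaller than n does not degrade the optimal statistical error, while the linear-algebra cost drops dramatically.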

This talk is part of the Isaac Newton Institute Seminar Series.



© 2006-2024, University of Cambridge.