Feature Learning in Two-layer Neural Networks: The Effect of Data Covariance
If you have a question about this talk, please contact Nicolas Boulle.

We study the effect of gradient-based optimization on feature learning in two-layer neural networks. We consider a setting where the number of samples is of the same order as the input dimension and show that, when the input data is isotropic, gradient descent always improves upon the initial random features model in terms of prediction risk, for a certain class of targets. Further leveraging the practical observation that data often contains additional structure, i.e., that the input covariance has non-trivial alignment with the target, we prove that the class of learnable targets can be significantly extended, demonstrating a clear separation between kernel methods and two-layer neural networks in this regime.

This talk is part of the Applied and Computational Analysis series.
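To make the setup concrete, here is a minimal NumPy sketch of a toy version of this setting. It is not the speaker's construction: the ReLU single-index target, the spiked covariance Sigma = I + kappa * theta theta^T, the single large gradient step on the first layer followed by a ridge refit of the second layer, and all hyperparameters below are illustrative assumptions. Setting kappa = 0 recovers the isotropic case.

```python
# Toy sketch (assumptions, not the speaker's setup): compare a random features
# model (first layer frozen at initialization) against "feature learning"
# (one gradient step on the first layer before refitting the second layer),
# on data with covariance I + kappa * theta theta^T aligned with the target.
import numpy as np

rng = np.random.default_rng(0)
d, width = 64, 512           # input dimension and hidden width
n_train, n_test = 64, 4096   # proportional regime: n_train is of order d
kappa = 5.0                  # spike strength; kappa = 0 gives isotropic data

theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)  # target direction, unit norm

def sample(n):
    """x ~ N(0, I + kappa * theta theta^T), labels y = relu(<theta, x>)."""
    z = rng.standard_normal((n, d))
    g = rng.standard_normal((n, 1))
    x = z + np.sqrt(kappa) * g * theta
    return x, np.maximum(x @ theta, 0.0)

def ridge_fit(features, y, lam=1e-3):
    """Closed-form ridge regression for the second-layer weights."""
    A = features.T @ features + lam * np.eye(features.shape[1])
    return np.linalg.solve(A, features.T @ y)

def test_risk(W, a, X, y):
    """Squared prediction risk of the network x -> relu(x W^T) a."""
    return np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2)

Xtr, ytr = sample(n_train)
Xte, yte = sample(n_test)
W0 = rng.standard_normal((width, d)) / np.sqrt(d)
a0 = rng.standard_normal(width) / np.sqrt(width)

# Random features baseline: first layer frozen at initialization.
a_rf = ridge_fit(np.maximum(Xtr @ W0.T, 0.0), ytr)
print("random features test risk:      ", test_risk(W0, a_rf, Xte, yte))

# Feature learning: one gradient step on W for the squared loss, then refit a.
pre = Xtr @ W0.T                        # pre-activations, shape (n_train, width)
resid = np.maximum(pre, 0.0) @ a0 - ytr
grad_W = ((resid[:, None] * (pre > 0)) * a0[None, :]).T @ Xtr / n_train
W1 = W0 - 2.0 * grad_W                  # a single large step (learning rate 2.0)
a_fl = ridge_fit(np.maximum(Xtr @ W1.T, 0.0), ytr)
print("one-step feature learning risk: ", test_risk(W1, a_fl, Xte, yte))
```

Running this with kappa = 0 versus kappa > 0 illustrates the kind of comparison the talk analyzes; whether a gap appears in this toy run depends on the chosen width, step size, and sample size, so it should be read as a qualitative illustration rather than a reproduction of the results.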