On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex
If you have a question about this talk, please contact INI IT.

STSW01 - Theoretical and algorithmic underpinnings of Big Data

Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and by the opportunities provided by new hardware and software platforms. I discuss several recent, related results in this area: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time Lagrangian/Hamiltonian/symplectic perspective; (2) a discussion of how to escape saddle points efficiently in nonconvex optimization; and (3) the acceleration of Langevin diffusion.
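For concreteness, here is a minimal Python sketch of topic (1): Nesterov's accelerated gradient method, whose continuous-time limit is the ODE X'' + (3/t) X' + grad f(X) = 0 that motivates the Lagrangian/Hamiltonian view. This is an illustrative sketch, not code from the talk; the function names and parameters are assumptions.

    import numpy as np

    def nesterov(grad, x0, step, iters):
        # Nesterov's accelerated gradient method for smooth convex f.
        # In the continuous-time limit (step -> 0) the iterates follow
        # the ODE  X'' + (3/t) X' + grad_f(X) = 0.
        x = np.asarray(x0, dtype=float)
        y = x.copy()
        for k in range(1, iters + 1):
            x_next = y - step * grad(y)                     # gradient step at the extrapolated point
            y = x_next + (k - 1) / (k + 2) * (x_next - x)   # momentum / extrapolation
            x = x_next
        return x

For a quadratic f(x) = x'Ax/2 one would take grad = lambda x: A @ x and step = 1/L with L the largest eigenvalue of A; the method then converges at the accelerated O(1/k^2) rate rather than gradient descent's O(1/k).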
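Topic (2) can be illustrated by perturbed gradient descent: run plain gradient descent, and whenever the gradient is small (so the iterate may be near a saddle point) add a small random perturbation, which suffices to escape strict saddles with high probability. The thresholds below are illustrative placeholders, not the tuned constants from any particular analysis.

    import numpy as np

    def perturbed_gd(grad, x0, step, iters, g_thresh=1e-3, radius=1e-2, seed=None):
        # Gradient descent that jitters the iterate whenever the gradient
        # is small, so the method escapes strict saddle points instead of
        # stalling there. g_thresh and radius are illustrative, not tuned.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) <= g_thresh:
                noise = rng.normal(size=x.shape)             # isotropic random direction
                x = x + radius * noise / np.linalg.norm(noise)
            else:
                x = x - step * g
        return x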
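Topic (3) concerns Langevin diffusion, the sampling analogue of gradient descent; adding a momentum (velocity) variable gives underdamped dynamics, whose role is analogous to acceleration in optimization. A minimal Euler-Maruyama discretization, with an assumed friction parameter gamma:

    import numpy as np

    def underdamped_langevin(grad, x0, step, iters, gamma=2.0, seed=None):
        # Euler-Maruyama discretization of underdamped Langevin dynamics:
        #   dX = V dt,   dV = -(gamma * V + grad_f(X)) dt + sqrt(2 * gamma) dB.
        # Approximately samples from the density proportional to exp(-f(x)).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)
        for _ in range(iters):
            v = (v - step * (gamma * v + grad(x))
                 + np.sqrt(2.0 * gamma * step) * rng.normal(size=x.shape))
            x = x + step * v
        return x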
This talk is part of the Isaac Newton Institute Seminar Series.