TBA
DMLW01 - International workshop on diffusions in machine learning: foundations, generative models, and optimisation

Mirror descent is a primal-dual convex optimization method that can be tailored to the geometry of the optimization problem at hand through the choice of a strongly convex potential function. It arises as a basic primitive in a variety of applications, including large-scale optimization, machine learning, and control. We propose a variational formulation of mirror descent and of its most straightforward stochastic analogue, mirror Langevin dynamics. The main idea leverages variational principles for gradient flows to show that (1) mirror descent emerges as a closed-loop solution for a certain optimal control problem; and (2) the Bellman value function is given by the Bregman divergence between the initial condition and the global minimizer of the objective function.
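As a concrete illustration of the primitive the abstract refers to, the sketch below runs mirror descent with the negative-entropy potential on the probability simplex, which yields the exponentiated-gradient update. It is a minimal illustrative example only; the function names, step size, and toy linear objective are assumptions for the sketch, not material from the talk.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, step=0.1, iters=200):
    """Mirror descent on the probability simplex with the negative-entropy
    potential phi(x) = sum_i x_i * log(x_i).

    With this potential the mirror (dual) step followed by the mirror map
    back to the simplex reduces to a multiplicative, normalised-exponential
    update (exponentiated gradient)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Gradient step in the dual space induced by the potential,
        # then map back onto the simplex by normalising.
        y = x * np.exp(-step * grad(x))
        x = y / y.sum()
    return x

if __name__ == "__main__":
    # Toy problem (illustrative assumption): minimise f(x) = <c, x> over the
    # simplex; the minimiser puts all mass on the coordinate with smallest cost.
    c = np.array([0.7, 0.2, 0.5])
    x_star = entropic_mirror_descent(lambda x: c, np.ones(3) / 3)
    print(x_star)  # mass concentrates on index 1, the smallest entry of c
```

Choosing a different strongly convex potential (e.g. the squared Euclidean norm, which recovers projected gradient descent) changes only the mirror map in this loop, which is the sense in which the method is "tailored to the geometry" of the problem.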
This talk is part of the Isaac Newton Institute Seminar Series.