
Deep learning as optimal control problems and Riemannian discrete gradient descent.


  • Speaker: Elena Celledoni (Norwegian University of Science and Technology)
  • Time: Thursday 21 November 2019, 15:05-15:45
  • Venue: Seminar Room 2, Newton Institute

If you have a question about this talk, please contact INI IT.

GCS - Geometry, compatibility and structure preservation in computational differential equations

We consider recent work in which deep neural networks are interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We review the first-order conditions for optimality and the conditions that ensure optimality is preserved after discretisation. This leads to a class of algorithms for solving the discrete optimal control problem which guarantee that the corresponding discrete necessary conditions for optimality are fulfilled. The differential equation setting lends itself to learning additional parameters, such as the time discretisation, and we explore this extension together with natural constraints (e.g. requiring the time steps to lie in a simplex). We compare these deep learning algorithms numerically in terms of the induced flow and generalisation ability. (Minimal sketches of the control formulation and of the simplex-constrained time steps follow below.)

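For readers new to this framing, here is a minimal sketch of the optimal control problem the abstract refers to, together with the forward Euler discretisation that recovers a residual network layer. The notation (state y(t), controls/weights theta(t), vector field f, terminal loss L, final time T) is assumed for illustration and is not taken from this page:

    % Minimal sketch; notation assumed for illustration.
    \begin{align*}
      \min_{\theta}\;\; & \mathcal{L}\bigl(y(T)\bigr)
          && \text{(loss on the final state)}\\
      \text{s.t.}\;\; & \dot{y}(t) = f\bigl(y(t),\theta(t)\bigr),\quad y(0)=x
          && \text{(ODE constraint, input datum $x$)}\\
      & y_{k+1} = y_k + h_k\, f(y_k,\theta_k)
          && \text{(forward Euler step = ResNet layer)}
    \end{align*}

The first-order necessary conditions pair the state equation with an adjoint equation that runs backwards in time (the continuous-time counterpart of backpropagation) and a stationarity condition in the controls:

    \begin{align*}
      \dot{p}(t) &= -\bigl(\partial_y f(y,\theta)\bigr)^{\!\top} p(t),
          & p(T) &= \nabla\mathcal{L}\bigl(y(T)\bigr),\\
      0 &= \bigl(\partial_\theta f(y,\theta)\bigr)^{\!\top} p(t).
    \end{align*}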

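To make the ResNet-as-forward-Euler correspondence concrete, here is a small self-contained NumPy sketch; every name and shape in it (f, resnet_forward, the tanh block) is a hypothetical illustration, not code from the talk or the cited paper:

    # Sketch: a ResNet forward pass as forward Euler on dy/dt = f(y, theta).
    # All names and shapes here are illustrative assumptions.
    import numpy as np

    def f(y, theta):
        """One evaluation of the 'vector field': a single tanh block."""
        W, b = theta
        return np.tanh(y @ W + b)

    def resnet_forward(y0, thetas, steps):
        """Forward Euler: y_{k+1} = y_k + h_k * f(y_k, theta_k).
        `steps` holds the (possibly learned) time steps h_k."""
        y = y0
        for theta, h in zip(thetas, steps):
            y = y + h * f(y, theta)
        return y

    # Toy usage: 3 "layers" acting on a batch of 4 points in R^2.
    rng = np.random.default_rng(0)
    thetas = [(0.1 * rng.standard_normal((2, 2)), np.zeros(2)) for _ in range(3)]
    steps = np.full(3, 1.0 / 3)  # uniform steps h_k summing to T = 1
    y0 = rng.standard_normal((4, 2))
    print(resnet_forward(y0, thetas, steps).shape)  # -> (4, 2)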

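The simplex constraint on the time steps means h_k >= 0 with the sum of the h_k fixed to the final time T. One generic way to enforce this while the steps are being learned is Euclidean projection onto the simplex after each update; the sort-based routine below (the standard algorithm of Duchi et al., 2008) is a sketch of that technique, not necessarily the method used in the talk:

    # Sketch: Euclidean projection onto {h : h_k >= 0, sum h_k = total}.
    # Standard sort-based algorithm; an assumption, not the talk's method.
    import numpy as np

    def project_to_simplex(h, total=1.0):
        u = np.sort(h)[::-1]                    # sort descending
        css = np.cumsum(u) - total              # shifted cumulative sums
        ks = np.arange(1, len(h) + 1)
        rho = np.max(np.where(u - css / ks > 0)[0]) + 1
        tau = css[rho - 1] / rho                # threshold
        return np.maximum(h - tau, 0.0)

    h = np.array([0.5, -0.2, 0.9])
    print(project_to_simplex(h))        # -> [0.3 0.  0.7], sums to 1.0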
References

- M. Benning, E. Celledoni, M. J. Ehrhardt, B. Owren, C.-B. Schönlieb, "Deep learning as optimal control problems: models and numerical methods", Journal of Computational Dynamics, 2019.

This talk is part of the Isaac Newton Institute Seminar Series.
