Feedback stabilization of Autonomous Systems via Deep Neural Network Approximation

MDLW03 - Deep learning and partial differential equations

Optimal feedback stabilization of nonlinear systems requires knowledge of the gradient of the solution to a Hamilton-Jacobi-Bellman (HJB) equation. This is a computationally challenging problem, typically plagued by the high dimension of the underlying dynamical system. In our contribution we do not address the solution of the HJB equation directly. Rather, we propose a framework for computing approximations of optimal feedback gains based on a learning approach using neural networks. The approach rests on two main ingredients: first, an optimal control (learning) formulation involving an ensemble of trajectories, with the ‘control’ variables given by the feedback gain functions; second, an approximation of the feedback functions by neural networks. Existence and convergence of optimal stabilizing neural network feedback controllers are proven. Numerical examples illustrate the performance in practice. This is joint work with Daniel Walter, Radon Institute, Linz.
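To make the two ingredients concrete, the sketch below illustrates this kind of learning formulation in JAX. It is not the authors' implementation: the dynamics f, the cost weights, the horizon, the network architecture, and the plain gradient-descent training loop are illustrative assumptions. A small neural network plays the role of the feedback gain function, and its parameters are trained by minimizing a finite-horizon running cost averaged over an ensemble of trajectories started from randomly sampled initial states.

import jax
import jax.numpy as jnp

def f(x):
    # Illustrative 2-D nonlinear dynamics (assumed for the example only).
    return jnp.array([x[1], -jnp.sin(x[0]) + 0.1 * x[1]])

B = jnp.array([0.0, 1.0])  # control enters through the second state

def init_params(key, width=16):
    k1, k2 = jax.random.split(key)
    return {
        "W1": 0.1 * jax.random.normal(k1, (width, 2)),
        "b1": jnp.zeros(width),
        "W2": 0.1 * jax.random.normal(k2, (1, width)),
        "b2": jnp.zeros(1),
    }

def feedback(params, x):
    # One-hidden-layer network standing in for the feedback gain function.
    h = jnp.tanh(params["W1"] @ x + params["b1"])
    return (params["W2"] @ h + params["b2"])[0]

def trajectory_cost(params, x0, dt=0.01, steps=500):
    # Explicit Euler rollout with running cost |x|^2 + |u|^2, a finite-horizon
    # surrogate for the stabilization objective.
    def step(carry, _):
        x, cost = carry
        u = feedback(params, x)
        x_next = x + dt * (f(x) + B * u)
        return (x_next, cost + dt * (jnp.dot(x, x) + u**2)), None
    (x_final, cost), _ = jax.lax.scan(step, (x0, jnp.zeros(())), None, length=steps)
    return cost

def ensemble_cost(params, X0):
    # Learning objective: cost averaged over an ensemble of initial states.
    return jnp.mean(jax.vmap(lambda x0: trajectory_cost(params, x0))(X0))

key = jax.random.PRNGKey(0)
key, init_key, data_key = jax.random.split(key, 3)
params = init_params(init_key)
X0 = jax.random.uniform(data_key, (32, 2), minval=-1.0, maxval=1.0)  # training ensemble

loss_and_grad = jax.jit(jax.value_and_grad(ensemble_cost))
lr = 0.05
for it in range(200):
    loss, grads = loss_and_grad(params, X0)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    if it % 50 == 0:
        print(f"iter {it:3d}  ensemble cost {loss:.4f}")

In this sketch, differentiating through the trajectory rollout takes the place of solving the HJB equation: no value function or its gradient is computed explicitly; the feedback law itself is adjusted to reduce the ensemble-averaged cost.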

This talk is part of the Isaac Newton Institute Seminar Series.
