
Solving mean-field stochastic control problems by using deep learning



FD2W03 - Optimal control and fractional dynamics

Two famous approaches to solving stochastic control problems are Bellman's dynamic programming and Pontryagin's maximum principle. Dynamic programming can be very efficient, but it works only if the system is Markovian. The maximum principle, on the other hand, does not require the system to be Markovian, but it has the drawback of involving complicated backward stochastic differential equations. Mean-field systems are not Markovian a priori, but they can be made Markovian by adding to the system the Fokker-Planck equation for the law of the state. Dynamic programming can then be used to study optimal control of mean-field equations. Mean-field dynamics have many applications; in this talk I will present two in particular: optimal energy consumption by the cortex neural network, and initial investment problems. We will apply stochastic control methods to solve these problems. Since it is sometimes difficult to find explicit solutions mathematically, we will use numerical methods, applying deep learning techniques to solve special cases of the problems discussed above.
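To illustrate the numerical idea behind such deep-learning approaches, here is a minimal, self-contained sketch (not the speaker's implementation): a controlled diffusion whose drift depends on the empirical mean of the population (the mean-field term), with a one-parameter linear feedback control standing in for a neural-network policy. The dynamics, cost functional, and all parameters below are invented for illustration; the policy parameter is trained by gradient descent on a Monte Carlo estimate of the cost, using common random numbers so the finite-difference gradient is stable.

```python
import random


def simulate_cost(theta, n_paths=200, n_steps=20, dt=0.05, sigma=0.3, seed=0):
    """Monte Carlo cost of the linear feedback u = theta * x for the
    mean-field dynamics dX = (u + 0.5 * mean(X)) dt + sigma dW,
    with cost E[ sum (X^2 + u^2) dt + X_T^2 ]  (all choices illustrative)."""
    rng = random.Random(seed)          # fixed seed: common random numbers
    xs = [1.0] * n_paths               # all paths start at x0 = 1
    cost = 0.0
    for _ in range(n_steps):
        m = sum(xs) / n_paths          # empirical mean: the mean-field term
        new_xs = []
        for x in xs:
            u = theta * x              # feedback control (neural-net stand-in)
            cost += (x * x + u * u) * dt / n_paths
            new_xs.append(x + (u + 0.5 * m) * dt
                          + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0))
        xs = new_xs
    cost += sum(x * x for x in xs) / n_paths   # terminal cost E[X_T^2]
    return cost


# Train the policy parameter by gradient descent, with the gradient
# estimated by central finite differences on the Monte Carlo cost.
theta, lr, eps = 0.0, 0.1, 1e-3
for _ in range(60):
    g = (simulate_cost(theta + eps) - simulate_cost(theta - eps)) / (2 * eps)
    theta -= lr * g
```

In a genuine deep-learning treatment the scalar `theta` is replaced by a neural network taking the state (and possibly the law of the state) as input, and the finite-difference gradient by automatic differentiation, but the structure — simulate, estimate the cost, descend — is the same. For this quadratic-cost example the trained feedback gain comes out negative, pushing the state toward zero.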

This talk is part of the Isaac Newton Institute Seminar Series.



© 2006-2024, University of Cambridge.