Mean-field Markov Decision process with common noise and randomized controls: convergence rate and applications to targeted advertising

FD2W03 - Optimal control and fractional dynamics

We develop an exhaustive study of Markov decision processes (MDPs) under mean-field interaction in both states and actions, in the presence of common noise, and when optimization is performed over open-loop controls on an infinite horizon. We highlight the crucial role of relaxed controls for this class of models, called CMKV-MDP for conditional McKean-Vlasov MDP, with respect to classical MDP theory. We prove the correspondence between the CMKV-MDP and a general lifted MDP on the space of probability measures, and establish the dynamic programming Bellman fixed-point equation satisfied by the value function, as well as the existence of ε-optimal randomized feedback controls. We obtain the propagation of chaos of the optimal value functions of the N-agent MDP to the CMKV-MDP as N → +∞, with a convergence rate of order O(M_N^γ). We finally provide examples of application of the propagation of chaos result by approximately solving several toy models of the N-agent targeted advertising problem with social influence via the resolution of the associated CMKV-MDP. Based on joint work with Médéric Motte (LPSM).
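For orientation, a schematic form of such a lifted Bellman fixed-point equation on the space of probability measures is sketched below. The notation (set of randomized feedback controls Â, aggregated reward r̂, measure-valued transition map Φ, common noise ε⁰, discount factor β ∈ (0,1)) is an illustrative stand-in and not necessarily the authors' exact formulation:

$$ V(\mu) \;=\; \sup_{\hat a \in \hat{\mathcal A}} \Big\{ \hat r(\mu, \hat a) \;+\; \beta\, \mathbb{E}\big[\, V\big(\Phi(\mu, \hat a, \varepsilon^0)\big) \,\big] \Big\}, \qquad \mu \in \mathcal P(\mathcal X), $$

so that, under such assumptions, the value function V would be characterized as the fixed point of a contracting Bellman operator on P(X), with the expectation taken over the common noise ε⁰.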

This talk is part of the Isaac Newton Institute Seminar Series.
