
Random feature neural networks learn Black-Scholes type PDEs without curse of dimensionality


MDLW03 - Deep learning and partial differential equations

In this talk we consider a supervised learning problem in which the unknown target function is the solution to a Black-Scholes PDE or, more generally, to an exponential-Lévy partial (integro-)differential equation. We analyze the learning performance of random feature neural networks in this context. Random feature neural networks are single-hidden-layer feedforward neural networks in which only the output weights are trainable. This makes training particularly simple but, a priori, reduces expressivity. Interestingly, this is not the case for Black-Scholes type PDEs, as we show here. We derive bounds for the prediction error of random neural networks for learning sufficiently non-degenerate Black-Scholes type models. We provide a full error analysis addressing all error components and show that the derived bounds do not suffer from the curse of dimensionality. We apply these results to option pricing and validate the bounds numerically.
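To make the architecture concrete, the following is a minimal sketch of a random feature neural network regressor: hidden weights and biases are sampled once and frozen, and only the output weights are fit, here by a ridge-regularized least-squares solve. The ReLU activation, Gaussian sampling distribution, and synthetic target below are illustrative assumptions and need not match the talk's precise setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n = 5, 200, 1000  # input dimension, hidden width, sample size

# Hidden layer: weights and biases are sampled once and never trained.
W = rng.standard_normal((width, d))
b = rng.standard_normal(width)

def features(x):
    """Random ReLU features: phi(x) = max(x W^T + b, 0)."""
    return np.maximum(x @ W.T + b, 0.0)

# Synthetic supervised data; the target is a stand-in for a PDE solution.
X = rng.standard_normal((n, d))
y = np.sin(X.sum(axis=1))

# Only the output weights are trained: ridge least squares in closed form.
Phi = features(X)
lam = 1e-6
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(width), Phi.T @ y)

train_mse = np.mean((Phi @ a - y) ** 2)
```

Because the hidden layer is fixed, training reduces to a single linear solve in the output weights, which is what makes the method attractive despite its a priori limited expressivity.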

This talk is part of the Isaac Newton Institute Seminar Series series.
