Four Generations of High-Dimensional Neural Network Potentials

If you have a question about this talk, please contact Bo Peng.

Machine learning potentials (MLPs) have become an important tool for atomistic simulations in many fields, from chemistry to materials science. Their popularity rests on their ability to provide very accurate energies and forces, essentially indistinguishable from the underlying reference electronic structure calculations, at much reduced computational cost, enabling large-scale simulations of complex systems. Almost two decades ago, in 2007, the introduction of high-dimensional neural network potentials (HDNNPs) by Behler and Parrinello paved the way for the application of MLPs to condensed systems containing large numbers of atoms. However, these original second-generation HDNNPs, like most current MLPs, rely on a locality approximation in which atomic interactions are truncated at a finite distance. Third-generation MLPs overcome this restriction to short-range energies by including long-range electrostatic interactions without distance truncation. Still, there are surprisingly many systems for which long-range electrostatics alone are insufficient for a physically correct description, because non-local phenomena such as long-range charge transfer are essential. Such global effects can be captured by fourth-generation HDNNPs. In this talk, the evolution of HDNNPs will be discussed, along with key systems illustrating their applicability.
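For orientation, the generational differences described above can be summarized schematically by the form of the total energy. The LaTeX sketch below uses notation assumed here for illustration, not taken from the talk itself:

    % Second generation: sum of short-range atomic energies, each predicted
    % by an atomic neural network from local environment descriptors G_i
    E^{(2\mathrm{G})} = \sum_{i=1}^{N} E_i\!\left(\mathbf{G}_i\right)

    % Third generation: adds long-range electrostatics from
    % environment-dependent atomic charges q_i, evaluated without truncation
    E^{(3\mathrm{G})} = \sum_{i=1}^{N} E_i\!\left(\mathbf{G}_i\right) + E_{\mathrm{elec}}\!\left(\{q_i\}\right)

    % Fourth generation: the charges q_i come from a global charge-equilibration
    % step, so non-local charge transfer across the whole system is captured
    E^{(4\mathrm{G})} = \sum_{i=1}^{N} E_i\!\left(\mathbf{G}_i, q_i\right) + E_{\mathrm{elec}}\!\left(\{q_i\}\right)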

This talk is part of the Theory of Condensed Matter series.
