Principles and Techniques of Automatic Differentiation
If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.
This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image/voice recorded, please consider this before attending.
Computing accurate derivatives of a numerical model is a crucial task in many domains of Scientific Computing, in particular for gradient-based optimization and inverse problems. Automatic Differentiation (AD) is a software technique to obtain derivatives of functions provided as programs. Given a numerical model F implemented as a program P, AD adapts or transforms P into a new program that computes derivatives of F. We present the mathematical formalization that both justifies AD and explains its limitations. We briefly describe the software analyses that allow AD tools to produce more efficient code. We focus on the adjoint mode of AD, arguably the only way to obtain gradients at a reasonable cost, and present two real Scientific Computing applications. We give a brief panorama of current AD tools and conclude with research directions.
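To make the idea of transforming a program P into a derivative-computing program more concrete, here is a minimal hand-written sketch (not the speaker's tool or any specific AD system) of the adjoint mode for a tiny example function. It augments the original computation with a reverse sweep that propagates adjoint ("bar") variables, yielding the full gradient in roughly one extra pass regardless of the number of inputs, which is the property that makes adjoint mode attractive for gradients. The function F and all variable names are illustrative assumptions, not taken from the talk.

```python
import math

def F(x1, x2):
    # Original (primal) program P for F(x1, x2) = sin(x1 * x2) + x1**2.
    a = x1 * x2
    b = math.sin(a)
    c = x1 * x1
    return b + c

def F_adjoint(x1, x2):
    # Forward sweep: run P and keep the intermediates needed later.
    a = x1 * x2
    b = math.sin(a)
    c = x1 * x1
    y = b + c

    # Reverse sweep: propagate adjoints from the output back to the inputs,
    # applying the chain rule to each primal statement in reverse order.
    y_bar = 1.0                    # seed: d(y)/d(y)
    b_bar = y_bar                  # from y = b + c
    c_bar = y_bar
    x1_bar = 2.0 * x1 * c_bar      # from c = x1 * x1
    a_bar = math.cos(a) * b_bar    # from b = sin(a)
    x1_bar += x2 * a_bar           # from a = x1 * x2
    x2_bar = x1 * a_bar

    return y, (x1_bar, x2_bar)     # value and full gradient

if __name__ == "__main__":
    y, grad = F_adjoint(0.5, 2.0)
    print(y, grad)                 # gradient of F with respect to (x1, x2)
```

In practice an AD tool generates this kind of adjoint code automatically from the source of P; the program analyses mentioned in the abstract serve, among other things, to decide which intermediates must be stored or recomputed for the reverse sweep.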
This talk is part of the Microsoft Research Cambridge, public talks series.