
Efficient Computation through Tuned Approximation


If you have a question about this talk, please contact Hamza Fawzi.

Numerical software is being reconstructed to provide opportunities to tune the accuracy of computation dynamically to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of “oversolving” relative to the requirements, or the worthiness, of many models. Real datatypes are so routinely defaulted to double precision that GPUs did not gain wide acceptance in simulation environments until they provided, in hardware, operations not required in their original domain of graphics. However, driven by performance and energy incentives, much of computational science is now reverting to lower-precision arithmetic where possible. Many matrix operations, considered at a blockwise level, tolerate lower precision, and many blocks can in addition be approximated by low-rank near-equivalents. This leads to a smaller memory footprint, which implies higher residency in the memory hierarchy, leading in turn to less time and energy spent copying data, savings that may even dwarf those from fewer and cheaper flops. We provide examples from several application domains, including a look at campaigns in geospatial statistics and seismic processing that earned Gordon Bell Prize finalist status in 2022 and 2023, respectively.
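
A minimal sketch of the tile-level idea described in the abstract, not taken from the talk's own software: partition a matrix into blocks, replace each block by a truncated-SVD low-rank factorization whenever that is cheaper than dense storage, and hold whichever form wins in reduced precision. The block size, tolerance, float32 storage type, and the smooth test kernel below are hypothetical choices made purely for illustration; production libraries typically use cheaper rank-revealing factorizations than a full SVD.

import numpy as np


def compress_block(block, tol=1e-6, dtype=np.float32):
    """Compress one tile: keep low-rank SVD factors if they are cheaper
    than the dense tile, and store whichever form wins in `dtype`."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))      # numerical rank at relative tolerance tol
    m, n = block.shape
    if k * (m + n) < m * n:              # factors need fewer entries than the dense tile
        return ("lowrank", (U[:, :k] * s[:k]).astype(dtype), Vt[:k, :].astype(dtype))
    return ("dense", block.astype(dtype))


def compress_tiled(A, block=256, tol=1e-6, dtype=np.float32):
    """Tile A, compress every tile, and return the compressed byte count."""
    n = A.shape[0]
    tiles, nbytes = {}, 0
    for i in range(0, n, block):
        for j in range(0, n, block):
            rep = compress_block(A[i:i + block, j:j + block], tol, dtype)
            tiles[(i, j)] = rep
            nbytes += sum(part.nbytes for part in rep[1:])
    return tiles, nbytes


if __name__ == "__main__":
    # A smooth kernel matrix: its tiles are numerically low rank, so most of
    # the double-precision entries need not be stored at all.
    x = np.linspace(0.0, 1.0, 1024)
    A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))
    _, nbytes = compress_tiled(A)
    print(f"dense fp64: {A.nbytes / 1e6:.1f} MB, tiled low-rank fp32: {nbytes / 1e6:.2f} MB")

The reported compressed size illustrates the memory-footprint argument in the abstract: shrinking the stored representation is what raises residency in the memory hierarchy and cuts data movement, quite apart from any flop savings.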

This talk is part of the Applied and Computational Analysis series.
