Testing GPU Memory Consistency at Large

If you have a question about this talk, please contact Jamie Vicary.

Memory consistency specifications (MCSs) are a difficult, yet critical, part of a concurrent programming framework. Existing MCS testing tools are not immediately accessible and have therefore only been applied to a limited number of devices. However, in the post-Dennard-scaling landscape there has been an explosion of new architectures and frameworks, exemplified by graphics processing units (GPUs). Studying the shared-memory semantics of these new platforms is important for understanding program behavior and ensuring conformance to framework specifications.
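To make "weak behavior" concrete, here is a minimal CUDA sketch (my own illustration, not code from the talk) of the classic message-passing litmus test: one thread publishes data and then raises a flag, while another reads the flag and then the data. On a weakly ordered GPU the reader can observe the flag set while the data still reads zero, and MCS testing tools run tests like this enormous numbers of times, under memory stress, to provoke exactly that outcome.

```cuda
// Illustrative sketch (not from the talk): a "message passing" (MP) litmus test
// of the kind MCS testing tools run repeatedly while varying thread placement
// and memory stress, hunting for weak outcomes.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void mp_litmus(volatile int *x, volatile int *y, int *r0, int *r1) {
    if (blockIdx.x == 0) {
        // Writer: publish the data, then raise the flag.
        *x = 1;              // data
        // __threadfence();  // a fence here (and on the reader) would forbid the weak outcome
        *y = 1;              // flag
    } else {
        // Reader: read the flag, then the data.
        *r0 = *y;
        *r1 = *x;
    }
}

int main() {
    int *x, *y, *r0, *r1;
    cudaMallocManaged(&x,  sizeof(int));
    cudaMallocManaged(&y,  sizeof(int));
    cudaMallocManaged(&r0, sizeof(int));
    cudaMallocManaged(&r1, sizeof(int));
    *x = 0; *y = 0; *r0 = 0; *r1 = 0;

    mp_litmus<<<2, 1>>>(x, y, r0, r1);   // two blocks, one thread each
    cudaDeviceSynchronize();

    // The "weak" outcome is r0 == 1 && r1 == 0: the reader saw the flag
    // but not the data it was supposed to guard.
    printf("r0 = %d, r1 = %d%s\n", *r0, *r1,
           (*r0 == 1 && *r1 == 0) ? "  <-- weak behavior" : "");
    cudaFree(x); cudaFree(y); cudaFree(r0); cudaFree(r1);
    return 0;
}
```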

In this talk, I will discuss our work on wide-scale GPU MCS testing. We developed a new methodology, MC Mutants, which uses mutation testing to evaluate the effectiveness of MCS testing techniques. MC Mutants is built into an accessible testing tool, GPU Harbor, which we used to collect data from over 100 devices across seven GPU vendors. This massive testing campaign revealed bugs in several GPU compilers and provided insights into weak behavior characteristics across diverse architectures. Furthermore, these results were used to tune testing environments across different devices, allowing us to make testing portable and to contribute our tests to the official conformance test suite for WebGPU. Our ongoing work investigates how to increase the safety and security of GPU programming languages in the face of their weak shared-memory guarantees, as well as the challenges and opportunities that come with evolving architectures.
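One way to picture the mutation-testing idea, as I read the abstract: take a litmus test whose synchronization forbids the weak outcome, deliberately weaken that synchronization to create a "mutant", and judge a testing environment by whether it exposes (kills) the mutant. The sketch below is a hypothetical CUDA illustration of that loop; the kernel names, the fence-removal mutation, and the count_weak driver are my assumptions, not the actual MC Mutants or GPU Harbor implementation.

```cuda
// Hypothetical sketch of the mutation-testing loop (my illustration, not the
// MC Mutants code): a "mutant" litmus test has its synchronization deliberately
// weakened, and an effective testing environment should "kill" it by observing
// the now-allowed weak outcome.
#include <cstdio>
#include <cuda_runtime.h>

// Original test: data x is published before flag y, with device-scope fences.
__global__ void mp_original(volatile int *x, volatile int *y, int *r0, int *r1) {
    if (blockIdx.x == 0) { *x = 1; __threadfence(); *y = 1; }
    else                 { *r0 = *y; __threadfence(); *r1 = *x; }
}

// Mutant: the fences are dropped, so the weak outcome r0==1 && r1==0 is allowed.
__global__ void mp_mutant(volatile int *x, volatile int *y, int *r0, int *r1) {
    if (blockIdx.x == 0) { *x = 1; *y = 1; }
    else                 { *r0 = *y; *r1 = *x; }
}

// Run one variant many times and report how often the weak outcome shows up.
// Real tools add memory stress and thread-placement heuristics at this point.
int count_weak(bool mutant, int iters) {
    int *x, *y, *r0, *r1, weak = 0;
    cudaMallocManaged(&x, sizeof(int));  cudaMallocManaged(&y, sizeof(int));
    cudaMallocManaged(&r0, sizeof(int)); cudaMallocManaged(&r1, sizeof(int));
    for (int i = 0; i < iters; ++i) {
        *x = 0; *y = 0; *r0 = 0; *r1 = 0;
        if (mutant) mp_mutant<<<2, 1>>>(x, y, r0, r1);
        else        mp_original<<<2, 1>>>(x, y, r0, r1);
        cudaDeviceSynchronize();
        if (*r0 == 1 && *r1 == 0) ++weak;
    }
    cudaFree(x); cudaFree(y); cudaFree(r0); cudaFree(r1);
    return weak;
}

int main() {
    int iters = 10000;
    printf("original: %d weak outcomes in %d runs\n", count_weak(false, iters), iters);
    int killed = count_weak(true, iters);
    printf("mutant:   %d weak outcomes in %d runs (%s)\n",
           killed, iters, killed > 0 ? "killed" : "survived -- environment needs tuning");
    return 0;
}
```

A mutant that survives suggests the testing environment is too gentle for that device, which is the signal one would use to tune stress parameters per platform.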

This talk is part of the Logic and Semantics Seminar (Computer Laboratory) series.
