Scalable Parallel Computing with CUDA

If you have a question about this talk, please contact Peter Orbanz.

Modern GPUs exploit massive parallelism to deliver high-performance, scalable, programmable computing systems. The performance afforded by GPUs has already had a significant impact on scientific and parallel computing. With growth in single-thread CPU performance slowing, the trend toward parallel computing will continue, with significant implications for hardware and software design. NVIDIA's CUDA architecture for GPU computing provides a programmable, massively multithreaded processor capable of delivering performance comparable to supercomputers of only a few years ago. The CUDA scalable parallel programming model provides abstractions that are readily understood and that free programmers to focus on novel applications and efficient parallel algorithms.
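The abstractions mentioned above are, concretely, a hierarchy of threads, thread blocks, and grids: a kernel is written from the perspective of a single thread and launched over as many blocks as the problem requires, which is what makes the model scale across GPUs of different sizes. A minimal illustrative sketch (not part of the talk materials) is the standard SAXPY kernel:

```cuda
#include <cstdio>

// Each thread computes one element, indexed by its block and thread IDs.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified (managed) memory keeps the sketch short; a real application
    // might instead use explicit cudaMalloc/cudaMemcpy transfers.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements; the same
    // code runs unchanged on GPUs with few or many multiprocessors.
    int blocks = (n + 255) / 256;
    saxpy<<<blocks, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The key design point is that the programmer never enumerates processors: the runtime schedules blocks onto whatever hardware is present, which is the scalability property the talk title refers to.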

In this talk, I will provide a brief history of the evolution of GPUs into massively parallel, high-performance throughput processors. I will present the new NVIDIA Fermi architecture and discuss its programming and performance implications. I will then discuss the evolution and future of the CUDA programming model, and conclude by describing strategies, software tools, and resources for effectively developing computationally demanding algorithms and applications on modern GPUs.

This talk is part of the Machine Learning @ CUED series.



© 2006-2023, University of Cambridge.