
Taming GPU threads with F# and Alea.GPU


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally via http://research.microsoft.com. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.

Writing GPU kernel code that optimally exploits parallelism and the GPU architecture is the most challenging and time-consuming aspect of GPU software development. Programmers have to identify algorithms suitable for parallelization and, while implementing them, reason about deadlocks, synchronization, race conditions, shared memory layout, plurality of state, granularity, throughput, latency and memory bottlenecks. This means that new languages with professional tooling which increase the productivity of GPU software development, whilst retaining the full flexibility of the underlying GPU programming models CUDA and OpenCL, are of tremendous value. In this talk we introduce the upcoming version 2 of Alea.GPU, a high-productivity GPU development tool chain for .NET. We show how GPU scripting, dynamic compilation and unique features of the F# language can be leveraged to reduce the development effort for cross-platform GPU-accelerated applications. Finally we look at our new reactive dataflow model, which simplifies GPU computing further.
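To ground the list of hazards the abstract names, here is a minimal sketch in raw CUDA C (illustrative only, not taken from the talk) of a block-level sum reduction. Even this trivial kernel forces the programmer to manage shared memory layout, barrier placement and thread divergence by hand; the kernel name and block size are assumptions chosen for the example.

```cuda
#include <cuda_runtime.h>

#define BLOCK 256

// Each block reduces BLOCK input elements to one partial sum.
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float cache[BLOCK];   // shared memory layout is explicit
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    cache[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                 // omitting this barrier is a data race

    // Tree reduction. Every thread in the block must reach each
    // __syncthreads(), so the barrier sits outside the divergent
    // branch -- placing it inside the `if` would deadlock the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = cache[0];  // one partial sum per block
}
```

Getting any one of these details wrong produces silent data corruption or a hung kernel rather than a compile error, which is the productivity gap that higher-level tool chains such as Alea.GPU set out to close.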

This talk is part of the Microsoft Research Cambridge, public talks series.


