Optimizing sparse vector-matrix multiplication on GPUs
- Speaker: Alexander Monakov, ISP-RAS and Moscow State University
- Date & Time: Tuesday 15 December 2009, 14:15 - 15:15
- Venue: FW26, Computer Laboratory
Abstract
Graphics processors are highly parallel computational units employing multiple levels of parallelism and a multi-level memory hierarchy. Due to their high computational power they are increasingly used in scientific applications. However, optimizing algorithms for high performance on GPUs is not trivial.
We discuss optimizing sparse linear algebra on GPUs (specifically, sparse matrix-vector multiplication, SpMV, which is the most time-consuming step in many applications). We describe several known sparse matrix storage formats and present a new storage format that allows SpMV performance to be improved.
[This paper is to appear in HiPEAC’10]
About the speaker: Alexander Monakov is a PhD student at Moscow State University and an employee of the Institute for System Programming of the Russian Academy of Sciences (ISP-RAS), where he works on improving the GCC compiler. His interests include general-purpose GPU computing and compiler optimization technology, in particular using the polyhedral model for parallelism and locality optimization.
Series: This talk is part of the Computer Laboratory Programming Research Group Seminar series.