Autotuning Numerical Dense Linear Algebra for Batched Computation With GPU Hardware Accelerators

Title: Autotuning Numerical Dense Linear Algebra for Batched Computation With GPU Hardware Accelerators
Publication Type: Journal Article
Year of Publication: 2018
Authors: Dongarra, J., M. Gates, J. Kurzak, P. Luszczek, and Y. Tsai
Journal: Proceedings of the IEEE
Volume: 106
Issue: 11
Pagination: 2040–2055
Date Published: 2018-11
Keywords: Dense numerical linear algebra, performance autotuning
Abstract

Computational problems in engineering and scientific disciplines often rely on the solution of many instances of small systems of linear equations, called batched solves. In this paper, we focus on the important variants of batched Cholesky factorization and the subsequent forward and backward substitutions. The factorization requires the linear system matrices to be symmetric positive definite (SPD). We describe the implementation and automated performance engineering of the kernels that perform the factorization and the two substitutions. Our target platforms are graphics processing units (GPUs), which over the past decade have become an attractive high-performance computing (HPC) target for solvers of linear systems of equations. Due to their throughput-oriented design, GPUs exhibit the highest processing rates among the available processors. However, without careful design and coding, this speed is mostly restricted to large matrix sizes. We show an automated exploration of the implementation space as well as a new data layout for the batched class of SPD solvers. Our tests involve the solution of many thousands of linear SPD systems of exactly the same size. The primary focus of our techniques is on individual matrices in the batch with dimensions ranging from 5-by-5 to 100-by-100. We compare our autotuned solvers against state-of-the-art solvers such as those provided through NVIDIA channels and publicly available in the optimized MAGMA library. The observed performance is competitive and, in many practical cases, superior. The advantage of the presented methodology lies in achieving these results in a portable manner across matrix storage formats and GPU hardware architectures.
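As a rough illustration of the techniques the abstract describes (not code from the paper), the following CUDA sketch factorizes a batch of tiny SPD matrices with one matrix per thread and then applies the two triangular substitutions. It uses an interleaved, batch-major layout so that consecutive threads of a warp access consecutive addresses, which is one plausible reading of the "new data layout" mentioned above; the names N, BATCH, idx, batched_potrf, and batched_potrs are illustrative assumptions, and the paper's autotuned kernels differ in thread mapping, blocking, and layout details.

// Hedged sketch, not the paper's implementation: batched Cholesky
// factorization and triangular substitutions for tiny SPD systems,
// one matrix per thread. N, BATCH, and idx() are illustrative assumptions.
#include <cuda_runtime.h>
#include <cmath>

#define N 8          // matrix dimension (the paper targets 5 to 100)
#define BATCH 10000  // number of matrices in the batch

// Interleaved ("batch-major") layout: entry (i,j) of matrix k sits at
// (i + j*N)*BATCH + k, so threads of a warp load consecutive addresses.
__device__ inline size_t idx(int i, int j, int k) {
    return (size_t)(i + j * N) * BATCH + k;
}

// Unblocked, right-looking Cholesky; the lower triangle of each A is
// overwritten by its Cholesky factor L.
__global__ void batched_potrf(double *A) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= BATCH) return;
    for (int j = 0; j < N; ++j) {
        double d = sqrt(A[idx(j, j, k)]);
        A[idx(j, j, k)] = d;
        for (int i = j + 1; i < N; ++i)
            A[idx(i, j, k)] /= d;
        for (int jj = j + 1; jj < N; ++jj)          // trailing update
            for (int i = jj; i < N; ++i)
                A[idx(i, jj, k)] -= A[idx(i, j, k)] * A[idx(jj, j, k)];
    }
}

// The two substitutions: solve L*y = b, then L^T*x = y, one right-hand
// side per matrix; b uses the same interleaved layout (entry i of RHS k
// at i*BATCH + k) and is overwritten by the solution.
__global__ void batched_potrs(const double *A, double *b) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= BATCH) return;
    for (int i = 0; i < N; ++i) {                   // forward substitution
        double s = b[(size_t)i * BATCH + k];
        for (int j = 0; j < i; ++j)
            s -= A[idx(i, j, k)] * b[(size_t)j * BATCH + k];
        b[(size_t)i * BATCH + k] = s / A[idx(i, i, k)];
    }
    for (int i = N - 1; i >= 0; --i) {              // backward substitution
        double s = b[(size_t)i * BATCH + k];
        for (int j = i + 1; j < N; ++j)
            s -= A[idx(j, i, k)] * b[(size_t)j * BATCH + k];
        b[(size_t)i * BATCH + k] = s / A[idx(i, i, k)];
    }
}

A launch such as batched_potrf<<<(BATCH + 255) / 256, 256>>>(dA) processes the whole batch; an autotuner in the spirit of the paper would sweep parameters such as the thread-block size, the thread-to-matrix mapping, and the storage layout.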

DOI: 10.1109/JPROC.2018.2868961