Title | Batched Gauss-Jordan Elimination for Block-Jacobi Preconditioner Generation on GPUs |
Publication Type | Conference Proceedings |
Year of Publication | 2017 |
Authors | Anzt, H., J. Dongarra, G. Flegar, and E. S. Quintana-Ortí |
Conference Name | Proceedings of the 8th International Workshop on Programming Models and Applications for Multicores and Manycores |
Series Title | PMAM'17 |
Pagination | 1–10 |
Date Published | 2017-02 |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-4883-6 |
Keywords | block-Jacobi preconditioner, Gauss-Jordan elimination, graphics processing units (GPUs), iterative methods, matrix inversion, sparse linear systems |
Abstract | In this paper, we design and evaluate a routine for the efficient generation of block-Jacobi preconditioners on graphics processing units (GPUs). Concretely, to exploit the architecture of the graphics accelerator, we develop a batched Gauss-Jordan elimination CUDA kernel for matrix inversion that embeds an implicit pivoting technique and handles the entire inversion process in the GPU registers. In addition, we integrate extraction and insertion CUDA kernels to rapidly set up the block-Jacobi preconditioner. Our experiments compare the performance of our implementation against a sequence of batched routines from the MAGMA library realizing the inversion via the LU factorization with partial pivoting. Furthermore, we evaluate the costs of different strategies for the block-Jacobi extraction and insertion steps, using a variety of sparse matrices from the SuiteSparse matrix collection. Finally, we assess the efficiency of the complete block-Jacobi preconditioner generation in the context of an iterative solver applied to a set of computational science problems, and quantify its benefits over a scalar Jacobi preconditioner. |
URL | http://doi.acm.org/10.1145/3026937.3026940 |
DOI | 10.1145/3026937.3026940 |
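The batched CUDA kernel described in the abstract is not reproduced in this record. As a plain, sequential illustration of the core operation — Gauss-Jordan elimination with pivoting to invert one small diagonal block, the per-block job the batched kernel performs — here is a Python sketch (function name hypothetical; the paper's GPU kernel keeps the block in registers and uses an implicit pivoting technique rather than the explicit row swaps shown here):

```python
def gauss_jordan_invert(A):
    """Invert a small dense matrix A (list of lists) in place via
    Gauss-Jordan elimination with partial (row) pivoting.

    Row swaps performed during elimination are undone at the end as
    column swaps, since (P A)^-1 = A^-1 P^T.
    """
    n = len(A)
    pivots = [0] * n
    for k in range(n):
        # Partial pivoting: largest entry in column k on/below the diagonal.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        pivots[k] = p
        A[k], A[p] = A[p], A[k]
        d = A[k][k]
        # In-place trick: overwrite the pivot column with the k-th
        # column of the inverse as elimination proceeds.
        A[k][k] = 1.0
        for j in range(n):
            A[k][j] /= d
        for i in range(n):
            if i != k:
                m = A[i][k]
                A[i][k] = 0.0
                for j in range(n):
                    A[i][j] -= m * A[k][j]
    # Undo the row swaps as column swaps, in reverse order.
    for k in reversed(range(n)):
        p = pivots[k]
        if p != k:
            for row in A:
                row[k], row[p] = row[p], row[k]
    return A
```

In a block-Jacobi setting, each diagonal block extracted from the sparse matrix would be inverted independently this way and the inverses inserted back into the preconditioner; the paper's contribution is doing this as a single batched GPU operation over all blocks at once.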