Title | Accelerating Restarted GMRES with Mixed Precision Arithmetic |
Publication Type | Journal Article |
Year of Publication | 2021 |
Authors | Lindquist, N., P. Luszczek, and J. Dongarra |
Journal | IEEE Transactions on Parallel and Distributed Systems |
Date Published | 2021-06 |
Keywords | convergence, error correction, iterative methods, kernel, linear systems, stability analysis |
Abstract | The generalized minimum residual method (GMRES) is a commonly used iterative Krylov solver for sparse, non-symmetric systems of linear equations. Like other iterative solvers, its run time is dominated by data movement. To reduce this cost, we propose running GMRES in reduced precision while keeping key operations in full precision. Additionally, we provide theoretical results linking the convergence of finite-precision GMRES with classical Gram-Schmidt with reorthogonalization (CGSR) to that of its infinite-precision counterpart, which helps justify the convergence of this method to double-precision accuracy. We tested the mixed-precision approach with a variety of matrices and preconditioners on a GPU-accelerated node. Excluding the incomplete LU factorization without fill-in (ILU(0)) preconditioner, we achieved average speedups ranging from 8 to 61 percent relative to comparable double-precision implementations, with the simpler preconditioners achieving the larger speedups. |
DOI | 10.1109/TPDS.2021.3090757 |
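The sketch below is a minimal illustration of the idea summarized in the abstract, not the authors' GPU implementation: a restarted GMRES whose inner Arnoldi iteration runs in single precision while the residual computation at each restart and the solution update stay in double precision, using classical Gram-Schmidt with reorthogonalization (CGSR). The function name `mixed_precision_gmres`, the restart length, the tolerance, and the dense test matrix are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only (assumed names and parameters; not the paper's code).
# Inner Krylov work is done in float32; restart-level residuals and the solution
# update are kept in float64. Dense NumPy arrays are used for simplicity,
# although the paper targets sparse systems.
import numpy as np

def mixed_precision_gmres(A, b, x0=None, restart=30, tol=1e-10, max_restarts=50):
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(np.float64)
    b_norm = np.linalg.norm(b)
    A32 = A.astype(np.float32)              # reduced-precision copy for the inner loop
    for _ in range(max_restarts):
        # Restart: true residual in full (double) precision.
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta / b_norm < tol:
            return x
        # Inner Arnoldi iteration in reduced (single) precision.
        V = np.zeros((n, restart + 1), dtype=np.float32)
        H = np.zeros((restart + 1, restart), dtype=np.float32)
        V[:, 0] = (r / beta).astype(np.float32)
        k = restart
        for j in range(restart):
            w = A32 @ V[:, j]
            # CGSR: classical Gram-Schmidt applied twice for stability.
            for _ in range(2):
                h = V[:, :j + 1].T @ w
                w -= V[:, :j + 1] @ h
                H[:j + 1, j] += h
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-7 * beta:    # (near-)breakdown: Krylov space exhausted
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # Small least-squares solve and solution update in double precision.
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k].astype(np.float64), e1, rcond=None)
        x = x + V[:, :k].astype(np.float64) @ y
    return x

# Example usage on a small, well-conditioned nonsymmetric system.
rng = np.random.default_rng(0)
A = np.eye(200) + 0.05 * rng.standard_normal((200, 200))
b = rng.standard_normal(200)
x = mixed_precision_gmres(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Because convergence is checked against the double-precision residual computed at each restart, the reduced-precision inner iteration does not limit the final attainable accuracy, which is the behavior the paper's analysis of CGSR-based GMRES is meant to justify.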