Title | Batch QR Factorization on GPUs: Design, Optimization, and Tuning |
Publication Type | Book Chapter |
Year of Publication | 2022 |
Authors | Abdelfattah, A., S. Tomov, and J. Dongarra |
Editor | Groen, D., C. de Mulatier, M. Paszyński, V. V. Krzhizhanovskaya, J. J. Dongarra, and P. M. A. Sloot |
Book Title | Lecture Notes in Computer Science |
Volume | 13350 |
Date Published | 2022-06 |
Publisher | Springer International Publishing |
City | Cham |
ISBN Number | 978-3-031-08750-9 |
Keywords | Batch linear algebra, GPU computing, QR factorization |
Abstract | QR factorization of dense matrices is a ubiquitous tool in high performance computing (HPC). From solving linear systems and least squares problems to eigenvalue problems, and singular value decompositions, the impact of a high performance QR factorization is fundamental to computer simulations and many applications. More importantly, the QR factorization on a batch of relatively small matrices has acquired a lot of attention in sparse direct solvers and low-rank approximations for Hierarchical matrices. To address this interest and demand, we developed and present a high performance batch QR factorization for Graphics Processing Units (GPUs). We present a multi-level blocking strategy that adjusts various algorithmic designs to the size of the input matrices. We also show that following the LAPACK QR design convention, while still useful, is significantly outperformed by unconventional code structures that increase data reuse. The performance results show multi-fold speedups against the state of the art libraries on the latest GPU architectures from both NVIDIA and AMD. |
URL | https://link.springer.com/chapter/10.1007/978-3-031-08751-6_5 |
DOI | 10.1007/978-3-031-08751-6_5 |
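
The core idea the abstract describes, factoring many small matrices in a single batched operation rather than one at a time, can be illustrated on the CPU. The sketch below is not the chapter's GPU implementation; it only demonstrates the batched-QR concept using NumPy's `np.linalg.qr`, which broadcasts over leading dimensions (stacked-array support requires NumPy >= 1.22).

```python
import numpy as np

# Illustration of the batched-QR concept (CPU sketch, not the
# chapter's GPU code): factor a stack of small matrices at once.
# np.linalg.qr treats a (batch, m, n) array as `batch` independent
# m-by-n QR factorizations.
rng = np.random.default_rng(0)
batch, m, n = 1000, 32, 32          # many small matrices, the regime the chapter targets
A = rng.standard_normal((batch, m, n))

Q, R = np.linalg.qr(A)              # Q: (batch, m, n), R: (batch, n, n)

# Every matrix in the batch satisfies A[i] = Q[i] @ R[i],
# with R[i] upper triangular.
assert np.allclose(Q @ R, A)
assert np.allclose(np.tril(R, -1), 0.0)
```

On a GPU, launching one kernel (or one library call) for the whole batch amortizes launch overhead and exposes enough parallelism to saturate the device, which is why batched interfaces such as those discussed in the chapter outperform looping over per-matrix factorizations.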