Title: Performance Tuning and Optimization Techniques of Fixed and Variable Size Batched Cholesky Factorization on GPUs
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Abdelfattah, A., A. Haidar, S. Tomov, and J. Dongarra
Conference Name: International Conference on Computational Science (ICCS'16)
Conference Location: San Diego, CA
Keywords: batched computation, Cholesky factorization, GPUs, tuning
Solving a large number of relatively small linear systems has recently drawn more attention in the HPC community, due to the importance of such computational workloads in many scientific applications, including sparse multifrontal solvers. Modern hardware accelerators and their architecture require a set of optimization techniques that are very different from the ones used in solving one relatively large matrix. In order to impose concurrency on such throughput-oriented architectures, a common practice is to batch the solution of these matrices as one task offloaded to the underlying hardware, rather than solving them individually.
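To make the batched interface concrete, the following sketch shows the fixed-size case in NumPy rather than CUDA: a single call factorizes an entire stack of small symmetric positive definite matrices at once, which mirrors the "one task for the whole batch" semantics described above (the paper's actual kernels are GPU code; NumPy is used here only for illustration).

```python
import numpy as np

# Build a batch of 1000 small (8x8) symmetric positive definite matrices.
rng = np.random.default_rng(0)
batch, n = 1000, 8
A = rng.standard_normal((batch, n, n))
spd = A @ A.transpose(0, 2, 1) + n * np.eye(n)  # A A^T + n I is SPD

# One call factorizes the whole batch: L[i] @ L[i].T == spd[i] for every i.
L = np.linalg.cholesky(spd)

# Verify the factorization across the full batch.
assert np.allclose(L @ L.transpose(0, 2, 1), spd)
```

On a GPU, the analogous operation would be a single batched kernel launch (e.g., a `potrfBatched`-style routine), so the hardware sees one large task instead of thousands of tiny ones.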
This paper presents a high-performance batched Cholesky factorization for large sets of relatively small matrices on Graphics Processing Units (GPUs), addressing both fixed and variable size batched problems. We investigate various algorithm designs and optimization techniques, and show that it is essential to combine kernel design with performance tuning in order to achieve the best possible performance. We compare our approaches against state-of-the-art CPU solutions as well as GPU-based solutions using existing libraries, and show that, on a K40c GPU for example, our kernels are more than 2× faster.
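The variable-size case the paper addresses cannot be expressed as a single stacked array, since each matrix has its own dimension. A minimal host-side sketch of the problem shape (again in NumPy, purely for illustration — the paper's contribution is a GPU kernel that processes all sizes in one launch, not this loop):

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [4, 7, 16, 32]  # each matrix in the batch has its own size

# Generate one SPD matrix per size.
mats = []
for n in sizes:
    A = rng.standard_normal((n, n))
    mats.append(A @ A.T + n * np.eye(n))

# A sequential loop stands in for the single variable-size batched
# GPU kernel described in the paper.
factors = [np.linalg.cholesky(M) for M in mats]

# Each factor satisfies L @ L.T == M for its own matrix.
for M, L in zip(mats, factors):
    assert np.allclose(L @ L.T, M)
```

The performance challenge is precisely that this per-matrix loop serializes tiny tasks; the batched kernel instead maps matrices of different sizes onto the GPU concurrently, which is where the kernel-design and tuning choices discussed in the paper matter.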