%0 Conference Paper
%B Parallel and Distributed Processing Symposium Workshops (IPDPSW)
%D 2017
%T Autotuning Batch Cholesky Factorization in CUDA with Interleaved Layout of Matrices
%A Mark Gates
%A Jakub Kurzak
%A Piotr Luszczek
%A Yu Pei
%A Jack Dongarra
%K batch computation
%K Cholesky factorization
%K data layout
%K GPU computing
%K numerical linear algebra
%X Batch matrix operations address the case of solving the same linear algebra problem for a very large number of very small matrices. In this paper, we focus on implementing the batch Cholesky factorization in CUDA, in single precision arithmetic, for NVIDIA GPUs. Specifically, we look into the benefits of using noncanonical data layouts, where consecutive memory locations store elements with the same row and column index in a set of consecutive matrices. We discuss a number of different implementation options and tuning parameters. We demonstrate superior performance to traditional implementations for the case of very small matrices.
%I IEEE
%C Orlando, FL
%8 2017-06
%G eng
%R 10.1109/IPDPSW.2017.18
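
The interleaved layout described in the abstract can be illustrated with a minimal CUDA sketch. This is not the paper's implementation; the kernel name, the one-thread-per-matrix mapping, the fixed matrix size N, and the SPD test matrix are all hypothetical choices made here for illustration. The layout itself follows the abstract: element (i,j) of matrix b in a batch of `batch` matrices is stored at A[(i*N + j)*batch + b], so corresponding elements of consecutive matrices occupy consecutive memory locations.

    // Minimal sketch (not the paper's code): one thread factorizes one matrix.
    // Interleaved layout: element (i,j) of matrix b lives at A[(i*N + j)*batch + b],
    // so threads working on consecutive matrices touch consecutive addresses.
    #include <cstdio>
    #include <cmath>
    #include <vector>
    #include <cuda_runtime.h>

    #define N 4   // matrix dimension; hypothetical, chosen very small per the paper's setting

    __global__ void batch_potrf_interleaved(float *A, int batch)
    {
        int b = blockIdx.x * blockDim.x + threadIdx.x;  // one matrix per thread
        if (b >= batch) return;

        // Unblocked in-place Cholesky, A = L*L^T, lower triangle.
        for (int k = 0; k < N; ++k) {
            float akk = sqrtf(A[(k*N + k)*batch + b]);
            A[(k*N + k)*batch + b] = akk;
            for (int i = k + 1; i < N; ++i)
                A[(i*N + k)*batch + b] /= akk;
            for (int j = k + 1; j < N; ++j)         // trailing-submatrix update
                for (int i = j; i < N; ++i)
                    A[(i*N + j)*batch + b] -=
                        A[(i*N + k)*batch + b] * A[(j*N + k)*batch + b];
        }
    }

    int main()
    {
        const int batch = 1024;
        std::vector<float> hA(N * N * batch);
        // Fill each matrix with an SPD test case: N+1 on the diagonal, 1 elsewhere.
        for (int b = 0; b < batch; ++b)
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    hA[(i*N + j)*batch + b] = (i == j) ? N + 1.0f : 1.0f;

        float *dA;
        cudaMalloc(&dA, hA.size() * sizeof(float));
        cudaMemcpy(dA, hA.data(), hA.size() * sizeof(float), cudaMemcpyHostToDevice);
        batch_potrf_interleaved<<<(batch + 127) / 128, 128>>>(dA, batch);
        cudaMemcpy(hA.data(), dA, hA.size() * sizeof(float), cudaMemcpyDeviceToHost);
        printf("L[0](0,0) = %f (expect %f)\n", hA[0], sqrtf(N + 1.0f));
        cudaFree(dA);
        return 0;
    }

In this layout a warp of 32 threads factorizing matrices b..b+31 reads 32 consecutive floats at every access, which is the coalescing benefit the abstract alludes to; in the canonical layout (each matrix stored contiguously), the same access pattern would stride memory by N*N elements.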