%0 Journal Article
%J ACM Transactions on Mathematical Software (TOMS)
%D 2021
%T A Set of Batched Basic Linear Algebra Subprograms and LAPACK Routines
%A Abdelfattah, Ahmad
%A Costa, Timothy
%A Dongarra, Jack
%A Gates, Mark
%A Haidar, Azzam
%A Hammarling, Sven
%A Higham, Nicholas J
%A Kurzak, Jakub
%A Luszczek, Piotr
%A Tomov, Stanimire
%A others
%K Computations on matrices
%K Mathematical analysis
%K Mathematics of computing
%K Numerical analysis
%X This article describes a standard API for a set of Batched Basic Linear Algebra Subprograms (Batched BLAS or BBLAS). The focus is on many independent BLAS operations on small matrices that are grouped together and processed by a single routine, called a Batched BLAS routine. The matrices are grouped together in uniformly sized groups, with just one group if all the matrices are of equal size. The aim is to provide more efficient, but portable, implementations of algorithms on high-performance many-core platforms. These include multicore and many-core CPU processors, GPUs and coprocessors, and other hardware accelerators with floating-point compute facility. As well as the standard types of single and double precision, we also include half and quadruple precision in the standard. In particular, half precision is used in many very large scale applications, such as those associated with machine learning.
%B ACM Transactions on Mathematical Software (TOMS)
%V 47
%P 1–23
%G eng
%R 10.1145/3431921
%0 Journal Article
%J The International Journal of High Performance Computing Applications
%D 2021
%T A survey of numerical linear algebra methods utilizing mixed-precision arithmetic
%A Abdelfattah, Ahmad
%A Anzt, Hartwig
%A Boman, Erik G
%A Carson, Erin
%A Cojean, Terry
%A Dongarra, Jack
%A Fox, Alyson
%A Gates, Mark
%A Higham, Nicholas J
%A Li, Xiaoye S
%A others
%K GPUs
%K High-performance computing
%K Linear algebra
%K Mixed-precision arithmetic
%K Numerical mathematics
%X The efficient utilization of mixed-precision numerical linear algebra algorithms can offer attractive acceleration to scientific computing applications. Especially with the hardware integration of low-precision special-function units designed for machine learning applications, the traditional numerical algorithms community urgently needs to reconsider the floating point formats used in the distinct operations to efficiently leverage the available compute power. In this work, we provide a comprehensive survey of mixed-precision numerical linear algebra routines, including the underlying concepts, theoretical background, and experimental results for both dense and sparse linear algebra problems.
%B The International Journal of High Performance Computing Applications
%V 35
%P 344–369
%G eng
%R 10.1177/10943420211003313