Overview

The Production-ready, Exascale-enabled Krylov Solvers for Exascale Computing (PEEKS) project will explore the redesign of solvers and extend the DOE’s Extreme-scale Algorithms and Solver Resilience (EASIR) project. Many large-scale scientific applications rely heavily on preconditioned iterative solvers for large linear systems. For these solvers to efficiently exploit extreme-scale hardware, both the solver algorithms and the implementations must be redesigned to address challenges like extreme concurrency, complex memory hierarchies, costly data movement, and heterogeneous node architectures.
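To illustrate the kind of method at the heart of the project, the sketch below shows a preconditioned Conjugate Gradient (CG) iteration, one of the simplest Krylov solvers, with a Jacobi (diagonal) preconditioner. This is a minimal NumPy illustration, not PEEKS or Trilinos code; production exascale solvers add pipelining, communication avoidance, and far more robust preconditioners and stopping criteria.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A,
    where M_inv is an approximation to the inverse of A."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    z = M_inv @ r              # preconditioned residual
    p = z.copy()               # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # new A-conjugate direction
        rz = rz_new
    return x

# Example: a 1D Laplacian-like tridiagonal SPD system with
# a Jacobi preconditioner (diagonal of A).
n = 100
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
M_inv = np.diag(1.0 / np.diag(A))
b = np.ones(n)
x = pcg(A, b, M_inv)
```

In a distributed-memory setting, each `A @ p` becomes a sparse matrix-vector product with halo exchange, and each dot product becomes a global reduction; the cost of those reductions at extreme scale is exactly what communication-avoiding and pipelined Krylov variants target.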

The PEEKS effort aims to tackle these challenges and advance the capabilities of the ECP software stack by making the new scalable algorithms accessible within the Trilinos software ecosystem. Targeting exascale-enabled Krylov solvers, incomplete factorization routines, and parallel preconditioning techniques will ensure successful delivery of scalable Krylov solvers in robust, production-quality software that can be relied on by ECP applications.
2018 Poster
Sponsored By
Exascale Computing Project
National Nuclear Security Administration
The United States Department of Energy

Papers

Anzt, H., G. Flegar, T. Grützmacher, and E. S. Quintana-Ortí, “Toward a Modular Precision Ecosystem for High-Performance Computing,” The International Journal of High Performance Computing Applications, September 2019. DOI: 10.1177/1094342019846547
Yamazaki, I., A. Ida, R. Yokota, and J. Dongarra, “Distributed-Memory Lattice H-Matrix Factorization,” The International Journal of High Performance Computing Applications, August 2019. DOI: 10.1177/1094342019861139
Bai, Z., J. Dongarra, D. Lu, and I. Yamazaki, “Matrix Powers Kernels for Thick-Restart Lanczos with Explicit External Deflation,” International Parallel and Distributed Processing Symposium (IPDPS), May 2019.
Gruetzmacher, T., T. Cojean, G. Flegar, F. Göbel, and H. Anzt, “A Customized Precision Format Based on Mantissa Segmentation for Accelerating Sparse Linear Algebra,” Concurrency and Computation: Practice and Experience, January 2019. DOI: 10.1002/cpe.5418
Anzt, H., and G. Flegar, “Are we Doing the Right Thing? — A Critical Analysis of the Academic HPC Community,” 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Rio de Janeiro, Brazil, IEEE, 2019. DOI: 10.1109/IPDPSW.2019.00122
Jagode, H., A. Danalis, H. Anzt, I. Yamazaki, M. Hoemmen, E. Boman, S. Tomov, and J. Dongarra, “Software-Defined Events (SDEs) in MAGMA-Sparse,” Innovative Computing Laboratory Technical Report, no. ICL-UT-18-12, University of Tennessee, December 2018.
Anzt, H., J. Dongarra, G. Flegar, and T. Gruetzmacher, “Variable-Size Batched Condition Number Calculation on GPUs,” SBAC-PAD, Lyon, France, September 2018.
Anzt, H., E. Chow, and J. Dongarra, “ParILUT - A New Parallel Threshold ILU,” SIAM Journal on Scientific Computing, vol. 40, no. 4, pp. C503–C519, July 2018. DOI: 10.1137/16M1079506
Anzt, H., I. Yamazaki, M. Hoemmen, E. Boman, and J. Dongarra, “Solver Interface & Performance on Cori,” Innovative Computing Laboratory Technical Report, no. ICL-UT-18-05, University of Tennessee, June 2018.
Anzt, H., J. Dongarra, G. Flegar, N. J. Higham, and E. S. Quintana-Ortí, “Adaptive Precision in Block-Jacobi Preconditioning for Iterative Sparse Linear System Solvers,” Concurrency and Computation: Practice and Experience, March 2018. DOI: 10.1002/cpe.4460
Anzt, H., and J. Dongarra, “A Jaccard Weights Kernel Leveraging Independent Thread Scheduling on GPUs,” SBAC-PAD, 2018.
Anzt, H., T. Gruetzmacher, E. Quintana-Orti, and F. Scheidegger, “High-Performance GPU Implementation of PageRank with Reduced Precision based on Mantissa Segmentation,” 8th Workshop on Irregular Applications: Architectures and Algorithms, 2018.
Anzt, H., G. Collins, J. Dongarra, G. Flegar, and E. S. Quintana-Ortí, “Flexible Batched Sparse Matrix-Vector Product on GPUs,” Proceedings of the 8th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA '17), Denver, Colorado, ACM Press, November 2017. DOI: 10.1145/3148226.3148230
Anzt, H., E. Boman, J. Dongarra, G. Flegar, M. Gates, M. Heroux, M. Hoemmen, J. Kurzak, P. Luszczek, S. Rajamanickam, et al., “MAGMA-sparse Interface Design Whitepaper,” Innovative Computing Laboratory Technical Report, no. ICL-UT-17-05, September 2017.
Anzt, H., J. Dongarra, G. Flegar, and E. S. Quintana-Ortí, “Variable-Size Batched LU for Small Matrices and Its Integration into Block-Jacobi Preconditioning,” 46th International Conference on Parallel Processing (ICPP), Bristol, United Kingdom, IEEE, August 2017. DOI: 10.1109/ICPP.2017.18
Yamazaki, I., M. Hoemmen, P. Luszczek, and J. Dongarra, “Improving Performance of GMRES by Reducing Communication and Pipelining Global Collectives,” Proceedings of the 18th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC 2017), Best Paper Award, Orlando, FL, June 2017.

Presentations

Hoemmen, M., and I. Yamazaki, “Production Implementations of Pipelined & Communication-Avoiding Iterative Linear Solvers,” SIAM Conference on Parallel Processing for Scientific Computing, Tokyo, Japan, March 2018.
Anzt, H., G. Collins, J. Dongarra, G. Flegar, and E. S. Quintana-Ortí, “Flexible Batched Sparse Matrix Vector Product on GPUs,” ScalA'17: 8th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems, Denver, Colorado, November 2017.
Yamazaki, I., M. Hoemmen, P. Luszczek, and J. Dongarra, “Comparing Performance of s-step and Pipelined GMRES on Distributed-Memory Multicore CPUs,” SIAM Annual Meeting, Pittsburgh, Pennsylvania, July 2017.

ICL Team Members

Hartwig Anzt
Consultant
Natalie Beams
Research Scientist I
Sebastien Cayrols
Post Doctoral Research Associate
Jack Dongarra
University Distinguished Professor
Stanimire Tomov
Research Assistant Professor

In Collaboration With

Sandia National Laboratories

Exascale Computing Project

PEEKS is part of ICL's involvement in the Exascale Computing Project (ECP). The ECP was established with the goals of maximizing the benefits of high-performance computing (HPC) for the United States and accelerating the development of a capable exascale computing ecosystem. Exascale refers to computing systems at least 50 times faster than the nation's most powerful supercomputers in use today.

The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).