“Elastic deep learning through resilient collective operations,” SC-W 2023: Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, Denver, CO, ACM, November 2023.
“Callback-based completion notification using MPI Continuations,” Parallel Computing, vol. 106, pp. 102793, May 2021.
“Quo Vadis MPI RMA? Towards a More Efficient Use of MPI One-Sided Communication,” EuroMPI'21, Garching/Munich, Germany, 2021.
“A Report of the MPI International Survey” (Poster), EuroMPI/USA '20: 27th European MPI Users' Group Meeting, Austin, TX, September 2020.
“HAN: A Hierarchical AutotuNed Collective Communication Framework,” IEEE Cluster Conference, Kobe, Japan, Best Paper Award, IEEE Computer Society Press, September 2020.
“Using Advanced Vector Extensions AVX-512 for MPI Reduction,” EuroMPI/USA '20: 27th European MPI Users' Group Meeting, Austin, TX, September 2020.
“Using Arm Scalable Vector Extension to Optimize Open MPI,” 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID 2020), Melbourne, Australia, IEEE/ACM, May 2020.
“Give MPI Threading a Fair Chance: A Study of Multithreaded MPI Designs,” IEEE Cluster, Albuquerque, NM, IEEE, September 2019.
“Local Rollback for Resilient MPI Applications with Application-Level Checkpointing and Message Logging,” Future Generation Computer Systems, vol. 91, pp. 450-464, February 2019.
“A Survey of MPI Usage in the US Exascale Computing Project,” Concurrency and Computation: Practice and Experience, September 2018.
“PMIx: Process Management for Exascale Environments,” Parallel Computing, vol. 79, pp. 9–29, January 2018.
“PMIx: Process Management for Exascale Environments,” Proceedings of the 24th European MPI Users' Group Meeting, New York, NY, USA, ACM, pp. 14:1–14:10, 2017.
OMPI-X is part of ICL's involvement in the Exascale Computing Project (ECP). The ECP was established to maximize the benefits of high-performance computing (HPC) for the United States and to accelerate the development of a capable exascale computing ecosystem. Exascale refers to computing systems at least 50 times faster than the nation's most powerful supercomputers in use today.
The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).