| Field | Value |
| --- | --- |
| Title | Task-graph scheduling extensions for efficient synchronization and communication |
| Publication Type | Conference Proceedings |
| Year of Publication | 2021 |
| Authors | Bak, S., O. Hernandez, M. Gates, P. Luszczek, and V. Sarkar |
| Conference Name | Proceedings of the ACM International Conference on Supercomputing |
| Keywords | Compilers, Computing methodologies, Parallel computing methodologies, Parallel programming languages, Runtime environments, Software and its engineering, Software notations and tools |
Task graphs have been studied for decades as a foundation for scheduling irregular parallel applications and have been incorporated into many programming models, including OpenMP. While many high-performance parallel libraries are based on task graphs, they also have additional scheduling requirements, such as synchronization within inner levels of data parallelism and internal blocking communications. In this paper, we extend task-graph scheduling to support efficient synchronization and communication within tasks. Compared to past work, our scheduler avoids deadlock and oversubscription of worker threads, and refines victim selection to increase the overlap of sibling tasks. To the best of our knowledge, our approach is the first to combine gang scheduling and work stealing in a single runtime. Our approach has been evaluated on the SLATE high-performance linear algebra library. Relative to the LLVM OpenMP runtime, our runtime demonstrates performance improvements of up to 13.82%, 15.2%, and 36.94% for LU, QR, and Cholesky, respectively, evaluated across different configurations of matrix size, number of nodes, and use of CPUs vs. GPUs.
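To make the abstract's "refined victim selection" idea concrete, here is a minimal, hypothetical sketch of work-stealing deques in which an idle thief prefers a victim whose stealable task is a *sibling* (shares a parent task) of the thief's most recent task, so related subtasks overlap in time. All class and method names here are illustrative assumptions; this is not the paper's actual runtime or API, and real implementations would add locking, gang reservation of workers, and deadlock avoidance.

```python
import collections


class Task:
    """A node in a task graph; parent links define sibling relationships."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent


class WorkStealingPool:
    """Illustrative per-worker deques with sibling-preferring victim selection.

    Assumption-laden sketch: owners pop LIFO from the back for locality,
    thieves steal FIFO from the front, and victim ranking prefers a deque
    whose front task shares a parent with the thief's last-run task.
    """
    def __init__(self, n_workers):
        self.deques = [collections.deque() for _ in range(n_workers)]
        self.last_task = [None] * n_workers  # last task each worker ran

    def push(self, worker, task):
        self.deques[worker].append(task)

    def pop(self, worker):
        if self.deques[worker]:
            task = self.deques[worker].pop()  # owner: LIFO end
        else:
            task = self._steal(worker)
        if task is not None:
            self.last_task[worker] = task
        return task

    def _steal(self, thief):
        ctx = self.last_task[thief]
        candidates = [v for v in range(len(self.deques))
                      if v != thief and self.deques[v]]
        if not candidates:
            return None

        def sibling_rank(victim):
            # Rank 0 (best) if the victim's oldest task is a sibling of
            # the thief's last task; rank 1 otherwise.
            front = self.deques[victim][0]
            related = (ctx is not None and front.parent is not None
                       and front.parent is ctx.parent)
            return 0 if related else 1

        victim = min(candidates, key=sibling_rank)
        return self.deques[victim].popleft()  # thief: FIFO end
```

A small usage example: after worker 0 runs task `a`, its next steal prefers `a`'s sibling `b` (on worker 2) over the unrelated task `c` (on worker 1), increasing the overlap of sibling tasks that the abstract describes.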