Title | PTG: An Abstraction for Unhindered Parallelism |
Publication Type | Conference Paper |
Year of Publication | 2014 |
Authors | Danalis, A., G. Bosilca, A. Bouteiller, T. Herault, and J. Dongarra |
Conference Name | International Workshop on Domain-Specific Languages and High-Level Frameworks for High Performance Computing (WOLFHPC) |
Date Published | 2014-11 |
Publisher | IEEE Press |
Conference Location | New Orleans, LA |
Keywords | dte, parsec, plasma |
Abstract | Increased parallelism and the use of heterogeneous computing resources are now established trends in High Performance Computing (HPC), trends that, looking forward to Exascale, seem bound to intensify. Despite the evolution of hardware over the past decade, the programming paradigm of choice has invariably been derived from coarse-grain parallelism with explicit data movements. We argue that message passing has remained the de facto standard in HPC because, until now, the ever-increasing challenges that application developers had to address to create efficient, portable applications remained manageable for expert programmers. Data-flow-based programming is an alternative approach with significant potential. In this paper, we discuss the Parameterized Task Graph (PTG) abstraction and present the specialized input language that we use to specify PTGs in our data-flow task-based runtime system, PaRSEC. This language and the corresponding execution model contrast with the execution model of explicit message passing, as well as with the models of alternative task-based runtime systems. The Parameterized Task Graph language decouples the expression of the parallelism in the algorithm from the control-flow ordering, load balance, and data distribution. As a result, programs are more adaptable, map more efficiently onto challenging hardware, and maintain portability across diverse architectures. To support these claims, we discuss the different challenges of HPC programming and how PaRSEC can address them, and we demonstrate that on today’s large-scale supercomputers, PaRSEC can significantly outperform state-of-the-art MPI applications and libraries, a trend that will only intensify with future architectural evolution.
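To make the PTG idea concrete, the minimal C sketch below illustrates the core concept the abstract describes: tasks are identified by symbolic parameters, and their dependencies are functions of those parameters that are evaluated on demand, so the full task graph is never enumerated or stored. This is not the paper's actual JDF input language; the task-class names ("panel", "update"), the NT size, and the successor rule are invented purely for illustration.

```c
/* Hypothetical sketch of the Parameterized Task Graph idea (not the
 * PaRSEC JDF syntax): a task instance is a class name plus parameter
 * values, and its successors are computed from those parameters rather
 * than read from an explicitly built DAG. */
#include <stdio.h>

#define NT 4  /* illustrative problem size (e.g., number of tile columns) */

/* A task instance: class name plus its symbolic parameters. */
typedef struct { const char *task_class; int k; int m; } task_t;

/* Successor rule for a made-up "panel(k)" task class:
 * panel(k) feeds update(k, m) for m = k+1 .. NT-1.
 * The rule is a function of k, evaluated only when needed. */
static int panel_successors(int k, task_t *succ, int max_succ)
{
    int count = 0;
    for (int m = k + 1; m < NT && count < max_succ; m++)
        succ[count++] = (task_t){ "update", k, m };
    return count;
}

int main(void)
{
    task_t succ[NT];
    for (int k = 0; k < NT; k++) {
        int n = panel_successors(k, succ, NT);
        printf("panel(%d) ->", k);
        for (int i = 0; i < n; i++)
            printf(" %s(%d,%d)", succ[i].task_class, succ[i].k, succ[i].m);
        printf("\n");
    }
    return 0;
}
```

Because the dependency rule is symbolic, the same specification can be evaluated against any data distribution or hardware mapping, which is the decoupling of parallelism expression from control-flow ordering, load balance, and data distribution that the abstract claims.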