| Title | GPU-Aware Non-contiguous Data Movement In Open MPI |
| --- | --- |
| Publication Type | Conference Paper |
| Year of Publication | 2016 |
| Authors | Wu, W., G. Bosilca, R. vandeVaart, S. Jeaugey, and J. Dongarra |
| Conference Name | 25th International Symposium on High-Performance Parallel and Distributed Computing (HPDC'16) |
| Conference Location | Kyoto, Japan |
| Keywords | datatype, gpu, hybrid architecture, MPI, non-contiguous data |
Due to their higher parallel density and better power efficiency, GPUs have become increasingly popular in scientific applications. Many of these applications are based on the ubiquitous Message Passing Interface (MPI) programming paradigm and take advantage of non-contiguous memory layouts to exchange data between processes. However, support for efficient non-contiguous data movement for GPU-resident data is still in its infancy, which negatively impacts overall application performance.
To address this shortcoming, we present a solution that exploits the inherent parallelism in the datatype packing and unpacking operations. We developed a close integration between Open MPI’s stack-based datatype engine, NVIDIA’s Unified Memory Architecture, and GPUDirect capabilities. In this design, the datatype packing and unpacking operations are offloaded onto the GPU and handled by specialized GPU kernels, while the CPU remains the driver for data movement between nodes. By incorporating our design into the Open MPI library, we have shown significantly better performance for non-contiguous GPU-resident data transfers on both shared- and distributed-memory machines.