%0 Journal Article
%J Proceedings of the IEEE
%D 2018
%T Autotuning in High-Performance Computing Applications
%A Prasanna Balaprakash
%A Jack Dongarra
%A Todd Gamblin
%A Mary Hall
%A Jeffrey Hollingsworth
%A Boyana Norris
%A Richard Vuduc
%K High-performance computing
%K performance tuning programming systems
%X Autotuning refers to the automatic generation of a search space of possible implementations of a computation that are evaluated through models and/or empirical measurement to identify the most desirable implementation. Autotuning has the potential to dramatically improve the performance portability of petascale and exascale applications. To date, autotuning has been used primarily in high-performance applications through tunable libraries or previously tuned application code that is integrated directly into the application. This paper draws on the authors' extensive experience applying autotuning to high-performance applications, describing both successes and future challenges. If autotuning is to be widely used in the HPC community, researchers must address the software engineering challenges, manage configuration overheads, and continue to demonstrate significant performance gains and portability across architectures. In particular, tools that configure the application must be integrated into the application build process so that tuning can be reapplied as the application and target architectures evolve.
%B Proceedings of the IEEE
%V 106
%P 2068–2083
%8 2018-11
%G eng
%N 11
%R 10.1109/JPROC.2018.2841200

%0 Conference Paper
%B ACM MultiMedia Workshop 2017
%D 2017
%T Efficient Communications in Training Large Scale Neural Networks
%A Yiyang Zhao
%A Linnan Wang
%A Wei Wu
%A George Bosilca
%A Richard Vuduc
%A Jinmian Ye
%A Wenqi Tang
%A Zenglin Xu
%X We consider the problem of how to reduce the cost of the communication required for the parallel training of a neural network. The state-of-the-art method, Bulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many collective communication operations, such as broadcasts of parameters or reductions for sub-gradient aggregations, which for large messages quickly dominate overall execution time and limit parallel scalability. To address this problem, we develop a new technique for collective operations, referred to as Linear Pipelining (LP). It is tuned to the message sizes that arise in BSP-SGD and works effectively on multi-GPU systems. Theoretically, the cost of LP is invariant to P, the number of GPUs, while the cost of the more conventional Minimum Spanning Tree (MST) approach scales like O(log P). LP also demonstrates up to 2x higher bandwidth than the Bidirectional Exchange (BE) techniques widely adopted by current MPI implementations. We apply these collectives to BSP-SGD, showing that the proposed implementations reduce communication bottlenecks in practice while preserving the attractive convergence properties of BSP-SGD.
%B ACM MultiMedia Workshop 2017
%I ACM
%C Mountain View, CA
%8 2017-10
%G eng
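
The second abstract's cost claim (LP roughly invariant to P, MST scaling like O(log P)) can be illustrated with the standard alpha-beta (latency-bandwidth) cost model. The sketch below is not code from the paper; the constants ALPHA and BETA, the segment count k, and the function names are illustrative assumptions used only to show why a pipelined broadcast of a large, segmented message loses its dependence on the GPU count while a tree broadcast does not.

```python
# Sketch: alpha-beta cost model contrasting a pipelined (LP-style) broadcast
# with a binomial/minimum-spanning-tree (MST) broadcast.
# ALPHA (per-message latency) and BETA (per-byte transfer time) are assumed
# illustrative values, not measurements from the paper.

import math

ALPHA = 5e-6   # assumed per-message latency, in seconds
BETA = 1e-9    # assumed per-byte transfer time, in seconds/byte


def mst_broadcast_cost(p: int, n: int) -> float:
    """Binomial-tree broadcast: ceil(log2 P) rounds, full message each round."""
    return math.ceil(math.log2(p)) * (ALPHA + n * BETA)


def pipelined_broadcast_cost(p: int, n: int, k: int) -> float:
    """Pipelined broadcast: the message is split into k segments streamed
    along a chain of P GPUs; total steps = (P - 1) + (k - 1), so for large
    n and k the P-dependence becomes negligible."""
    segment = n / k
    return (p - 1 + k - 1) * (ALPHA + segment * BETA)


if __name__ == "__main__":
    n = 256 * 1024 * 1024   # 256 MB of parameters/gradients (assumed size)
    k = 1024                # segment count, an assumed tuning knob
    for p in (2, 4, 8, 16):
        print(f"P={p:2d}  MST={mst_broadcast_cost(p, n):.4f}s  "
              f"LP={pipelined_broadcast_cost(p, n, k):.4f}s")
```

Running the sketch shows the MST cost growing with ceil(log2 P) while the pipelined cost stays essentially flat as P increases, which is the scaling behavior the abstract describes.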