Title: Approximate Computing for Scientific Applications
Publication Type: Book Chapter
Year of Publication: 2022
Authors: Anzt, H., M. Casas, C. I. Malossi, E. S. Quintana-Ortí, F. Scheidegger, and S. Zhuang
Editors: Bosio, A., D. Ménard, and O. Sentieys
Book Title: Approximate Computing Techniques
Pages: 415–465
Publisher: Springer International Publishing
This chapter reviews the performance benefits of applying (software) approximate computing to scientific applications. For this purpose, we target two areas, linear algebra and deep learning: the first chosen because it is ubiquitous in scientific problems, the second because of its considerable and growing number of important applications in both industry and science.
The review of linear algebra in scientific computing focuses on the iterative solution of sparse linear systems. We expose the prevalent cost of memory accesses in these methods and demonstrate how approximate computing can reduce this overhead, for example in stationary solvers themselves, or in the application of preconditioners when solving sparse linear systems via Krylov subspace methods.
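As a minimal sketch of the idea (not the chapter's actual implementation), the sweep of a stationary solver such as Jacobi can read the matrix and iterate from reduced-precision (float32) storage, halving the memory traffic that dominates its cost, while accumulating the update in float64. All names below are illustrative.

```python
import numpy as np

def jacobi_low_precision(A, b, iters=200):
    """Jacobi iteration with the system matrix stored in float32.

    Hypothetical sketch: keeping A (and the operand of the
    matrix-vector product) in reduced precision halves the bytes
    read per sweep, while the update itself is accumulated in
    float64 to preserve accuracy.
    """
    A32 = A.astype(np.float32)                 # reduced-precision storage
    D = np.diag(A32).astype(np.float64)        # diagonal, promoted
    R32 = A32 - np.diag(np.diag(A32))          # off-diagonal part, float32
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(iters):
        # the matvec reads float32 data (less bandwidth); its result
        # is promoted to float64 before the Jacobi update
        x = (b - (R32 @ x.astype(np.float32)).astype(np.float64)) / D
    return x

# small diagonally dominant test system, for which Jacobi converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_low_precision(A, b)
```

The same pattern applies when the stationary sweep is used as a preconditioner inside a Krylov method: the preconditioner only needs to approximate the inverse, so it tolerates reduced precision particularly well.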
The discussion of deep learning focuses on the use of approximate data transfer to cut the cost of host-to-device operations, as well as the use of adaptive precision to accelerate the training of classical CNN architectures. Additionally, we discuss model optimization and architecture search under the constraints of edge-device applications.
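A minimal sketch of approximate data transfer (illustrative only, simulated with NumPy rather than a real device copy): casting a training batch to float16 before the host-to-device copy halves the bytes moved, at the cost of a small, bounded rounding error that training typically tolerates.

```python
import numpy as np

def compress_for_transfer(batch):
    """Cast a float32 batch to float16 before a (simulated)
    host-to-device copy, halving the bytes moved over the bus."""
    return batch.astype(np.float16)

def decompress_on_device(batch16):
    """Restore the working precision on the device side."""
    return batch16.astype(np.float32)

rng = np.random.default_rng(0)
# an illustrative CNN input batch: 64 images, 3 channels, 32x32 pixels
batch = rng.standard_normal((64, 3, 32, 32)).astype(np.float32)

sent = compress_for_transfer(batch)
received = decompress_on_device(sent)

bytes_saved = batch.nbytes - sent.nbytes     # 50% of the transfer volume
max_err = np.max(np.abs(batch - received))   # bounded float16 rounding error
```

In a real pipeline the float16 buffer would be what crosses the PCIe bus (e.g., via a framework's pinned-memory copy), and the cast back to working precision would happen on the device.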