CTWatch Quarterly, November 2006 A
High Productivity Computing Systems and the Path Towards Usable Petascale Computing
Piotr Luszczek, University of Tennessee
Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory
Jeremy Kepner, MIT Lincoln Lab

6. Conclusions

No single test can accurately compare the performance of today's high-end systems, let alone of the systems the HPCS program envisions for the future. The HPCC suite therefore stresses not only the processors but also the memory subsystem and the interconnect, and it is a better indicator of how a supercomputing system will perform across a spectrum of real-world applications. Now that the more comprehensive HPCC suite is available, it can be used in preference to comparisons and rankings based on a single test. The real utility of the HPCC benchmarks is that architectures can be described by a wider range of metrics than just the flop/s rate from HPL. When one looks only at HPL performance and the TOP500 list, inexpensive build-your-own clusters appear far more cost-effective than more sophisticated parallel architectures. The HPCC tests show, however, that even a small percentage of random memory accesses in a real application can significantly degrade its overall performance on architectures that are not designed to minimize or hide memory latency. The HPCC results thus give users additional information with which to justify policy and purchasing decisions. We expect to expand the suite, and perhaps remove some existing components, as we learn more about the collection.
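To make the memory-latency point concrete, the small C program below is a minimal sketch, not part of the HPCC source; the table size, update count, and the simple index generator are illustrative choices of ours. It contrasts streaming updates with randomly scattered updates over the same table, in the spirit of the suite's STREAM and RandomAccess components; on cache-based systems the random pass typically runs many times slower even though both passes perform the same number of updates.

    /* Sketch: streaming vs. random memory updates over one table.
     * TABLE_WORDS, NUM_UPDATES, and the LCG below are illustrative,
     * not taken from the HPCC code base. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define TABLE_WORDS (1 << 24)          /* 16M 64-bit words, ~128 MB   */
    #define NUM_UPDATES (TABLE_WORDS * 4)  /* four updates per table word */

    int main(void)
    {
        unsigned long long *table = calloc(TABLE_WORDS, sizeof *table);
        if (table == NULL) return 1;

        /* Streaming pass: consecutive addresses, friendly to caches
         * and hardware prefetchers. */
        clock_t t0 = clock();
        for (long long i = 0; i < NUM_UPDATES; i++)
            table[i % TABLE_WORDS] ^= (unsigned long long)i;
        double stream_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Random pass: a linear congruential generator scatters the
         * updates across the table, defeating caches and prefetchers. */
        unsigned long long x = 1;
        t0 = clock();
        for (long long i = 0; i < NUM_UPDATES; i++) {
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            table[x % TABLE_WORDS] ^= x;
        }
        double random_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("streaming updates: %.2f s, random updates: %.2f s\n",
               stream_s, random_s);
        free(table);
        return 0;
    }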

This work was supported in part by DARPA, NSF, and DOE through the DARPA HPCS program under grants FA8750-04-1-0219 and SCI-0527260.
References
1 Kepner, J. “HPC productivity: An overarching view,” International Journal of High Performance Computing Applications, 18(4), November 2004.
2 Kahan, W. “The baleful effect of computer benchmarks upon applied mathematics, physics and chemistry,” The John von Neumann Lecture at the 45th Annual Meeting of SIAM, Stanford University, 1997.
3 Meuer, H. W., Strohmaier, E., Dongarra, J. J., Simon, H. D. TOP500 Supercomputer Sites, 28th edition, November 2006. (The report can be downloaded from www.netlib.org/benchmark/top500.html).
4 Dongarra, J. J., Luszczek, P., Petitet, A. “The LINPACK benchmark: Past, present, and future,” Concurrency and Computation: Practice and Experience, 15:1-8, 2003.
5 Moore, G. E. “Cramming more components onto integrated circuits,” Electronics, 38(8), April 19, 1965.
6 Dongarra, J., Luszczek, P. “Introduction to the HPC Challenge benchmark suite,” Technical Report UT-CS-05-544, University of Tennessee, 2005.
7 Luszczek, P., Dongarra, J. “High performance development for high end computing with Python Language Wrapper (PLW),” International Journal of High Performance Computing Applications, 2006. Accepted to Special Issue on High Productivity Languages and Models.
8 Travinin, N., Kepner, J. “pMatlab parallel Matlab library,” International Journal of High Performance Computing Applications, 2006. Submitted to Special Issue on High Productivity Languages and Models.
9 ANSI/IEEE Standard 754-1985. “Standard for binary floating point arithmetic,” Technical Report, Institute of Electrical and Electronics Engineers, 1985.
10 Langou, J., Langou, J., Luszczek, P., Kurzak, J., Buttari, A., Dongarra, J. “Exploiting the performance of 32 bit floating point arithmetic in obtaining 64 bit accuracy,” In Proceedings of SC06, Tampa, Florida, November 11-17, 2006. See icl.utk.edu/iter-ref.
11 HPCC - icl.utk.edu/hpcc/
12 Kernighan, B. W., Ritchie, D. M. The C Programming Language. Prentice-Hall, 1978.
13 OpenMP: Simple, portable, scalable SMP programming. www.openmp.org/.
14 Chandra, R., Dagum, L., Kohr, D., Maydan, D., McDonald, J., Menon, R. Parallel Programming in OpenMP. Morgan Kaufmann Publishers, 2001.
15 Message Passing Interface Forum. “MPI: A Message-Passing Interface Standard,” The International Journal of Supercomputer Applications and High Performance Computing, 8, 1994.
16 Message Passing Interface Forum. MPI: A Message-Passing Interface Standard (version 1.1), 1995. Available at: www.mpi-forum.org/.
17 Message Passing Interface Forum. MPI-2: Extensions to the Message-Passing Interface, 18 July 1997. Available at www.mpi-forum.org/docs/mpi-20.ps.


Reference this article
Luszczek, P., Dongarra, J., Kepner, J. "Design and Implementation of the HPC Challenge Benchmark Suite," CTWatch Quarterly, Volume 2, Number 4A, November 2006 A. http://www.ctwatch.org/quarterly/articles/2006/11/design-and-implementation-of-the-hpc-challenge-benchmark-suite/
