CTWatch
August 2005
The Coming Era of Low Power, High-Performance Computing — Trends, Promises, and Challenges
Wu-chun Feng, Los Alamos National Laboratory

Low-Power HPC: The Past

Based on the above evidence, I would argue that although performance and price/performance are important, we need to focus more attention on efficiency and reliability in the coming decades. As contended above, this translates into a substantial reduction in the power consumption of HPC systems via low-power (or power-aware) approaches. Our Green Destiny cluster was arguably one of the first such systems,4,8,9 designed in late 2001 and debuting in early 2002 as the first major instantiation of the Supercomputing in Small Spaces project.10

Green Destiny, as shown in Figure 2a, was a 240-CPU Linux-based cluster with a footprint of only five square feet and a power appetite of as little as 3.2 kW (i.e., two hairdryers). Performance-wise, it produced 101 Gflops on the Linpack benchmark, as fast as a 256-CPU SGI Origin 2000 of the day.11 Despite this competitive performance,12 many still felt that Green Destiny sacrificed too much performance to achieve its low power consumption and, consequently, its high efficiency and unprecedented reliability: no unscheduled downtime in its 24-month lifetime while running at 7,400 feet above sea level in a dusty 85°F warehouse without any cooling, air filtration, or air humidification.
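For perspective, and assuming the quoted 3.2 kW draw held while the Linpack benchmark was running (measured power under benchmark load is not reported here), Green Destiny's Linpack power efficiency works out to roughly

    101 Gflops / 3.2 kW = 101,000 Mflops / 3,200 W ≈ 31.6 Mflops/W

which is more than an order of magnitude above the Perf/Power figures reported for the other systems in Table 2 below (albeit on a different workload).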

The above tradeoff is captured (in part) in Table 2, where we present the raw configuration and execution numbers of four HPC systems as well as their efficiency numbers with respect to memory density, storage density, and computational efficiency relative to space and power consumption.13 As one would expect from a Formula One race car of supercomputing, the ASCI White supercomputer leads all the raw performance categories. On the other hand, given that Green Destiny was specifically designed with low power and high efficiency in mind, it handily “wins” all the efficiency categories: its memory density, storage density, and computational efficiency relative to space and power are all two orders of magnitude better (or nearly so) than those of the other HPC systems in Table 2.


Metric                     Avalon Beowulf  ASCI Red  ASCI White  Green Destiny
Year                       1996            1996      2000        2002
# CPUs                     140             9298      8192        240
Performance (Gflops)       18              600       2500        58
Space (ft²)                120             1600      9920        5
Power (kW)                 18              1200      2000        5
DRAM (GB)                  36              585       6200        150 (270 max)
Disk (TB)                  0.4             2.0       160.0       4.8 (38.4 max)
DRAM Density (MB/ft²)      300             366       625         30000 (54000 max)
Disk Density (GB/ft²)      3.3             1.3       16.1        960.0 (7680 max)
Perf/Space (Mflops/ft²)    150             375       252         11600
Perf/Power (Mflops/W)      1.0             0.5       1.3         11.6

Table 2. Comparison of HPC Systems on an n-body Astrophysics Code for Galaxy Formation
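Each efficiency row in Table 2 follows mechanically from the raw rows above it, so the derived figures are easy to check. The short Python sketch below is ours, not the article's; the data layout and variable names are assumptions, and the raw figures are copied straight from the table:

    # Reproduce the derived efficiency rows of Table 2 from its raw rows.
    # name: (perf Gflops, space ft^2, power kW, DRAM GB, disk TB)
    systems = {
        "Avalon Beowulf": (18,   120,  18,   36,   0.4),
        "ASCI Red":       (600,  1600, 1200, 585,  2.0),
        "ASCI White":     (2500, 9920, 2000, 6200, 160.0),
        "Green Destiny":  (58,   5,    5,    150,  4.8),
    }

    print(f"{'System':15} {'MB/ft2':>8} {'GB/ft2':>8} {'Mflops/ft2':>11} {'Mflops/W':>9}")
    for name, (perf, space, power, dram, disk) in systems.items():
        dram_density = dram * 1000 / space           # DRAM Density (MB/ft^2)
        disk_density = disk * 1000 / space           # Disk Density (GB/ft^2)
        perf_space   = perf * 1000 / space           # Perf/Space (Mflops/ft^2)
        perf_power   = perf * 1000 / (power * 1000)  # Perf/Power (Mflops/W)
        print(f"{name:15} {dram_density:8.0f} {disk_density:8.2f} "
              f"{perf_space:11.0f} {perf_power:9.2f}")

The output matches the table to rounding; e.g., ASCI White's Perf/Power computes to 1.25 Mflops/W, which the table rounds to 1.3.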


Reference this article
Feng, W. "The Importance of Being Low Power in High Performance Computing," CTWatch Quarterly, Volume 1, Number 3, August 2005. http://www.ctwatch.org/quarterly/articles/2005/08/the-importance-of-being-low-power-in-high-performance-computing/
