With the exception of government laboratories, universities, and a handful of aggressive and capable industrial companies, most software applications and tools are supplied by independent software vendors (ISVs). However, more than a third of these companies are very small (annual revenue under $5M) and have limited resources to devote to the latest hardware innovations.4
In some cases, the skills needed to develop new algorithms that map to new hardware are in short supply, and an ISV will be more focused on serving its current installed base of customers than on reaching out to new technologies. In other cases, the new hardware may be viewed as "unproven," so the ISV will wait for greater market acceptance before porting its application codes to the new platform. This is not a new phenomenon: "many applications for supercomputing only scale to 32 processors and some of the most important only scale to four."5
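That scaling ceiling is essentially Amdahl's law at work: even a small serial fraction of a program caps the speedup that any number of processors can deliver. A minimal sketch in Python, assuming a purely hypothetical 3% serial fraction (the figure is illustrative, not taken from the cited study):

```python
# Amdahl's law: the speedup of a program on p processors is capped
# by its serial (non-parallelizable) fraction s.
def speedup(p: int, s: float) -> float:
    return 1.0 / (s + (1.0 - s) / p)

# Illustrative assumption: a 3% serial fraction (not a figure
# from the cited IDC study).
s = 0.03
for p in (4, 32, 800, 131_000):
    print(f"{p:>7,} processors -> {speedup(p, s):5.1f}x speedup")
```

Under that assumption, going from 800 processors to 131,000 barely moves the speedup at all; past a few dozen processors, the incremental gain all but vanishes unless the algorithm itself is reworked.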
By contrast, the current Blue Gene system at Lawrence Livermore National Laboratory has 131,000 processors, and even the last machine on the TOP500 list, #500, has 800. Furthermore, it is not uncommon for ISVs to charge license fees in proportion to the number of processors in a system (notably, they do not, for the most part, charge in proportion to the number of transistors on a chip, even though more transistors and more cores attack the same objective: producing more speed), giving some supercomputer customers serious sticker shock when the software bill arrives.
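To see where the sticker shock comes from, consider a back-of-the-envelope sketch using the processor counts above; the per-processor fee below is a made-up figure for illustration, not a published ISV price:

```python
# Hypothetical per-processor license pricing; the $200 fee is an
# assumed figure, not a real ISV price list.
PER_PROCESSOR_FEE = 200  # USD per processor, assumed

systems = [("TOP500 #500", 800), ("Blue Gene at LLNL", 131_000)]
for name, procs in systems:
    print(f"{name:>17}: {procs:>7,} processors -> "
          f"${procs * PER_PROCESSOR_FEE:>12,}")
```

At the same per-processor rate, the software bill grows more than 160-fold between the two machines, even though, as the sketch above suggests, the delivered speedup may grow hardly at all.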
Ultimately, the marketplace should work to resolve these issues, but they will remain serious for some time to come unless an entrepreneurial move disturbs the status quo with innovative algorithms, software, and business practices that map to the capabilities of state-of-the-art supercomputers. In the meantime, progress will continue to be made through collaborations such as the Blue Gene Consortium, the IBM-Los Alamos Partnership on Roadrunner, and the Terra Soft HPC Collaboration around the Cell BE.
While the evolution of software to better exploit multi-core architectures will unfold over time, customers are already reaping substantial benefits from multi-core systems today. In areas as diverse as digital media, financial services, information-based medicine, oil and gas production, nanotechnology, automotive design, life sciences, materials science, astronomy, and mathematics, supercomputers have been deployed to amazing effect, with material impact on the daily lives of everyone on the planet.
The ambition to reach a petaflop of computing is nearly universal, with major supercomputer acquisitions planned in the US, Europe, and Asia over the next few years. By itself, this ambition should go a long way towards providing the stimulus to close the hardware-software gap we witness today.
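A rough sense of the scale involved: at an assumed sustained rate per core (the 5 GFLOPS figure below is an illustrative assumption, not a vendor specification), a petaflop implies on the order of two hundred thousand cores:

```python
# Rough core count implied by a petaflop of sustained performance.
PETAFLOP = 1e15        # floating-point operations per second
FLOPS_PER_CORE = 5e9   # assumed sustained rate per core: 5 GFLOPS (illustrative)

print(f"~{PETAFLOP / FLOPS_PER_CORE:,.0f} cores at 5 GFLOPS each")
```

Machines of that size are exactly where the scaling and licensing issues described above bite hardest, which is why the petaflop race doubles as pressure to close the software gap.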
2 "Computational Science: Ensuring America's Competitiveness," President's Information Technology Advisory Committee, June 2005.
3 The Green500 List, www.green500.org/
4 Joseph, E., "The Council on Competitiveness Study of ISVs Serving the High Performance Computing Market," IDC Whitepaper, www.compete.org/hpc
5 Joseph, E., ibid.