CTWatch
November 2005
E-Infrastructure: Europe Meets the e-Science Challenge
Perspectives
Craig Mundie, CTO – Microsoft Corporation

The Role of High Performance Computing

I have had a long history in the HPC community: from 1982 to 1992 I was the founder and architect of Alliant Computer Systems Corporation in Boston. We spent a long time trying to develop tools and an architecture whose components would look fairly slow today. But architecturally, many of the concepts explored back then by Seymour Cray and the many supercomputer companies, of which Alliant was just one, still represent to this day the basic architectures that are being reproduced and extended as Moore’s Law continues to allow these things to be compacted.

In my present role as CTO of Microsoft, it is probably fair to say that I have been the ‘godfather’ in moving Microsoft to begin to play a role in technical computing. Up to now, the company has never really focused on this area. It is of course true that there are many people in the world who, whether they are in engineering or science, business or academia, use our products as tools on their desktop much as they think of pencil and paper: they would not want to work without them. But such tools are never really considered an integral part of the mission itself. It is my belief that many of the things that HPC and supercomputing have tended to drive will become important as you look down the road of general computing architectures.

The worldwide aggregate software market in technical computing is not all that large on a financial scale. However, Bill Gates and I have agreed over the last couple of years that engaging with HPC is not just a question of how big the market for technical computing software is per se. Rather, it is a strategic market in the sense of ultimately making sure that well-trained people will come out of the university environment and help society solve the difficult problems it will face in the future. Global society has an increasing need to solve some very difficult large-scale problems in engineering, science, medicine and many other fields. Microsoft has a huge research effort that has never been focused on such problems. I believe it is time we started to apply some of our research technology outside of the traditional ways we use it within our own commercial products. We think that by doing so, there is a lot that can be learned about the nature of future computing systems.

Many of the things that we thought of as de rigueur in terms of architectural issues and design problems in supercomputers in the late eighties and early nineties have now been shrunk down to a chip. Between 2010 and 2020, many of the things the HPC community is focusing on today will undergo a similar shrinking of footprint. We will wake up one day and find that the kinds of architectures we assemble today with blades and clusters are on a chip and being put into everything. In my work on strategy for Microsoft I have to look at the 10- to 20-year horizon rather than a one- to three-year horizon. The company’s entry into high performance computing is based on the belief that over the next 10 years or so, a growing number of people will want to use these kinds of technologies to solve more and more interesting problems. Another of my motivations is my belief that the problem set, even in that first 10-year period, will expand quite dramatically in terms of the types of problems to which people will apply these kinds of approaches.

There was certainly a time, when I was in the HPC business, when the people who wrote high performance programs were writing them largely for consumption in an engineering environment. Only a few HPC codes were used more broadly, in a small number of fields of academic research. Today, it is doubtful whether any substantive field of academic research in engineering or science could really progress without the use of advanced computing technologies. And these technologies are not just the architecture and the megaflops but also the tools and programming environments necessary to address these problems.


Reference this article
Mundie, C. "The Next Decade in HPC," CTWatch Quarterly, Volume 1, Number 4, November 2005. http://www.ctwatch.org/quarterly/articles/2005/11/the-next-decade-in-hpc/
