CTWatch Quarterly
February 2007
The Promise and Perils of the Coming Multicore Revolution and Its Impact
Jack Dongarra, Oak Ridge National Laboratory; University of Tennessee


John Manferdelli of Microsoft explores the situation for commercial application and system programmers, where the relative paucity of experience with parallel computing is liable to make the "shock of the new" delivered by the multicore revolution far more severe. After presenting David Patterson's compact formulation of how serial processing has been barred indefinitely from further performance improvement by the combined force of the "power wall," the "memory wall," and the "ILP wall," he describes several complementary approaches for bringing more concurrency into programming practice. But to succeed in helping commercial developers across the concurrency divide, these new development tools and techniques will require improvements on other fronts, especially in operating systems. He also provides an important preview of how we may expect operating systems to be adapted for concurrency, with new approaches to resource sharing that allow different system subcomponents flexible access to dedicated resources for specialized purposes.
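
Patterson's compact formulation is often summarized by the slogan "power wall + memory wall + ILP wall = brick wall" for serial performance. To make the serial-to-parallel shift concrete, here is a minimal sketch of the kind of explicit-threading code commercial developers are now being asked to write: a plain serial array sum recast as per-thread partial sums combined at the end. The sketch is illustrative rather than drawn from Manferdelli's article; the thread count, array size, and data are arbitrary. It uses POSIX threads and compiles with "cc -pthread sum.c" on a typical Unix system.

/* sum.c -- a serial array sum recast for multicore using POSIX threads.
 * Each thread sums a disjoint slice of the array into its own task
 * record; the main thread joins the workers and combines the partial
 * sums serially. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4            /* arbitrary; would be tuned to core count */
#define N 1000000

static double data[N];

typedef struct {
    size_t begin, end;        /* half-open range [begin, end) */
    double partial;           /* per-thread result */
} task_t;

static void *sum_range(void *arg)
{
    task_t *t = arg;
    double s = 0.0;
    for (size_t i = t->begin; i < t->end; i++)
        s += data[i];
    t->partial = s;
    return NULL;
}

int main(void)
{
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;        /* dummy data, so the expected sum is N */

    pthread_t threads[NTHREADS];
    task_t tasks[NTHREADS];
    size_t chunk = N / NTHREADS;

    for (int k = 0; k < NTHREADS; k++) {
        tasks[k].begin = k * chunk;
        tasks[k].end   = (k == NTHREADS - 1) ? N : (k + 1) * chunk;
        pthread_create(&threads[k], NULL, sum_range, &tasks[k]);
    }

    double total = 0.0;
    for (int k = 0; k < NTHREADS; k++) {
        pthread_join(threads[k], NULL);
        total += tasks[k].partial;   /* serial combination step */
    }

    printf("sum = %.0f (expected %d)\n", total, N);
    return 0;
}

Note the design choice that makes this easy to get right: each thread writes only its own partial field, so no locks or shared mutable state are needed until the final serial combination step.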

A view from inside the multicore revolution, offering an extended discussion of the critical factors of "balance" and "optimization" in processor design, is provided in the article by John McCalpin, Chuck Moore, and Phil Hester of Advanced Micro Devices (AMD). Their account of the motivation for adopting a multicore design strategy is much more concrete and quantitative than that of the previous articles, as is only appropriate for authors who had to grapple with this historic problem firsthand and as a mission-critical goal. They offer a fascinating peek inside the rationale for the movement to multicore, laying out the lines of reasoning and the critical tradeoffs and considerations (including market trends) that lead to different design points for processor manufacturers. Against this background, the speculations they offer about what we might expect in the near and medium term are bound to have more credibility than similar exercises by others not so well positioned.

Finally, the article by David Turek of IBM highlights a distinctly different but equally important strand in the multicore revolution: the introduction of new hybrid multicore architectures and their application to supercomputing. A conspicuous example of this significant trend is the use of the Cell Broadband Engine processor, created by an industry consortium to power Sony's next-generation PlayStation, in novel supercomputer designs such as the Roadrunner system at Los Alamos National Laboratory. Such hybrid systems provide convincing illustrations of how unexpected confluences of technological and economic forces in the software ecosystem [1] can produce new innovation. The dark side of this trend toward heterogeneity, however, is that it severely complicates the planning process for small ISVs, who have only scarce resources to apply to the latest hardware innovations.

It is hard to read the articles in this issue of CTWatch Quarterly without coming to the conclusion, shared by many other leaders in the field, that modern software ecosystems are about to be destabilized, not to say convulsed, by a major transformation in their hardware substrate. Over time, this may actually improve the health of software by changing people's attitudes about its value. There have long been complaints, especially in the HPC community, that software is substantially undervalued, with inadequate investment, beyond the initial research phase, in software hardening, enhancement, and long-term maintenance. New federal programs, such as the NSF's Software Development for Cyberinfrastructure (SDCI), represent a good first step in recognizing that good software has become absolutely essential to productivity in many areas. Yet I believe the multicore revolution, which is now upon us, will drive home the need to make that recognition a guiding principle of our national research strategy. For that reason, and for many others as well, the CTWatch community, and the scientific computing community in general, will miss the incandescent presence of Ken Kennedy, who died earlier this month. The historic challenges we are about to confront will require the very kind of visionary leadership for which Ken had all the right stuff, as he showed over and over again during his distinguished career. I will miss my friend.

References
[1] Messerschmitt, D. G. and Szyperski, C. Software Ecosystem: Understanding an Indispensable Technology and Industry. Cambridge, MA: MIT Press, 2003.


Reference this article
Dongarra, J. "Introduction," CTWatch Quarterly, Volume 3, Number 1, February 2007. http://www.ctwatch.org/quarterly/articles/2007/02/introduction/
