Archive for the ‘Hardware’ Category

Technical report on Blue Gene/L from IBM

Wednesday, May 4th, 2005

The latest issue of IBM’s Journal of Research and Development is devoted to Blue Gene/L. It includes an overview of the architecture, reports on various subsystems, and a rundown of the software environment. Not yet available for order! Operators are not standing by! Just look at it online, you goon.

A step toward HPC education

Monday, May 2nd, 2005

I suppose announcements of hardware acquisitions are becoming humdrum, unless you’re talking about Blue Gene-level performance. But this story published today in the University of Oklahoma student daily caught my eye. First of all, the 6.5-teraflop Dell system they’re installing is being nicknamed “Topdawg,” and that’s just a cool name. And second, the system is being installed at the OU Supercomputing Center for Education and Research (OSCER). Notice that “education” comes first in that name, and in fact:

Henry Neeman, the director of Supercomputing for IT, said OSCER is the only university program of its kind in the world. The program teaches supercomputing to a wide range of people. A workshop called “Supercomputing in Plain English” teaches OU students about supercomputing through simple methods.

“We’ve focused on people who don’t know much about supercomputing,” Neeman said.

“Supercomputing in Plain English” — what a great idea!

(BTW, there is an error in the article regarding the Top500 list, which is not, to my knowledge, compiled at NCSA. I’ve emailed the paper’s editor to suggest a correction. Maybe the confusion arose because NCSA’s computing environment includes a couple of Dell systems, Tungsten and T2.)
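Incidentally, for anyone wondering where a headline number like “6.5 teraflops” comes from: a system’s theoretical peak is just processor count times clock rate times floating-point operations per cycle. Here’s a back-of-envelope sketch in Python; the processor count and clock below are hypothetical stand-ins, since I don’t have Topdawg’s exact configuration in front of me:

```python
# Back-of-envelope sketch of where a rating like "6.5 teraflops"
# comes from. The processor count and clock speed below are
# hypothetical stand-ins, not Topdawg's published configuration.

processors = 1024        # hypothetical processor count
clock_hz = 3.2e9         # hypothetical 3.2 GHz clock
flops_per_cycle = 2      # e.g., one add and one multiply per cycle

peak_tflops = processors * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.1f} teraflops")  # ~6.6
```

Of course, peak numbers like this are marketing ceilings; sustained performance on real applications is always a good deal lower.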

Supercomputer Eye on the Universe

Wednesday, April 27th, 2005

Looks like the Europeans, the Dutch to be precise, have found an interesting use for what Reuters calls the most powerful computer in Europe in terms of sustained performance (27.4 teraflops). According to the article, which I picked up off CNN, the IBM system will:

“…process signals from up to 13 billion light years from earth — as far back in time as the beginnings of the earliest stars and galaxies after the formation of the universe.”

OSU team proposes new approach to quantum computing

Tuesday, April 26th, 2005

Greg Lafyatis, an associate professor of physics at (The) Ohio State (University), and his team recently published a paper in Physical Review A describing a new approach to quantum computing. According to Science News Daily, they:

designed a chip with a top surface of laser light that functions as an array of tiny traps, each of which could potentially hold a single atom. The design could enable quantum data to be read the same way CDs are read today.

Other research teams have created similar arrays, called optical lattices, but those designs present problems that could make them hard to use in practice: they lock atoms into a multi-layered cube floating in free space, where manipulating an atom at the center of the cube would be difficult. The Ohio State lattice has a more practical design, with a single layer of atoms held just above a glass chip, so each atom could be manipulated directly with a single laser beam.
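To get a feel for what “an array of tiny traps” made of laser light means, here’s a minimal Python sketch. This is my own toy model, not the OSU team’s chip design: two crossed standing waves produce a periodic grid of bright spots, and for a red-detuned laser the dipole force pulls atoms toward those spots, one potential trap site per spot.

```python
import numpy as np

# Toy model of an optical lattice (an illustration, not the OSU
# chip design): two orthogonal standing waves of laser light make
# a periodic intensity pattern, and each bright spot can act as a
# tiny trap for a single atom.

wavelength = 1.064e-6        # assumed trapping-laser wavelength (m)
k = 2 * np.pi / wavelength   # wavenumber

# Sample a patch a few lattice sites across.
coords = np.linspace(0, 3 * wavelength, 200)
X, Y = np.meshgrid(coords, coords)

# Standing-wave intensity; maxima repeat every half wavelength.
intensity = np.cos(k * X) ** 2 + np.cos(k * Y) ** 2

# For a red-detuned laser, the dipole potential is proportional to
# minus the intensity, so atoms settle at the bright spots.
potential = -intensity

site_spacing = wavelength / 2
print(f"Trap sites every {site_spacing * 1e9:.0f} nm")
print(f"Deepest well (arbitrary units): {potential.min():.2f}")
```

The half-wavelength spacing is the key constraint: the traps sit only a few hundred nanometers apart, which is why addressing one atom without disturbing its neighbors is such a design challenge.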

(Hat tip to Slashdot.)

Europe Rising, as Spain takes Supercomputing to Church

Monday, April 18th, 2005

More reminders today, as if we needed them, that the concerned commentary in the US about the decline of federal research funding in general, and the decline in investment in academic Computer Science research in particular, is taking place in a competitive environment where governments outside the US are moving ahead aggressively. This morning’s Grid Today article from Wolfgang Gentzsch, “Grid Computing: How Europe is Leading the Pack,” certainly oozes confidence that Europe is on track and on time in terms of grid computing:

“So, what makes Europe so different from other national and international Grid research projects? While early Grid initiatives in Europe were mostly unrelated point efforts (as are still many Grid projects around the world today), my impression from the European Grid Conference in Amsterdam is that, first and foremost, Europe now has a long-term, coordinated and shared Grid R&D vision, mission, strategy, roadmap and funding, driven by the European Commission’s IST Framework Programmes 5, 6 and 7 (the latter will start in 2006) and hosted by its Directorate General (DG) for Information Society.”

This claim seems all the more salient because it expresses a sense, coming from various directions, that the process of European unification is releasing a tremendous amount of economic, cultural, and intellectual energy that used to be locked up inside national borders but now flows easily across them. A unified cyberinfrastructure for Europe is both a powerful enabler and a powerful symbol of this transformation. And it doesn’t look like ambition will be in short supply:

“The Grid, for Europe, is far more than resource sharing. It is a big step forward to build the Cyberinfrastructure for a united research community tackling the grand challenges of our universe. It is a coordinated, single economic engine preparing to compete with Asia and the United States. And it is a commitment, through the advancement of next-generation technology, to improve the quality of life for every citizen in Europe.”

So “Old Europe” is looking mighty spry, indeed. The recent news of Spain’s new supercomputer, MareNostrum, is evidence that European cyberinfrastructure activity is occurring on many fronts. As you can see in the slideshow, they’re putting this baby on a raised floor “… in a chapel on the campus of the Polytechnical University in Barcelona.” Repurposing a 1920s chapel as a 21st-century supercomputer machine room certainly produces an interesting juxtaposition of centuries and symbols.

High performance cluster benchmarking

Tuesday, April 12th, 2005

Supercomputing Online reports that the Texas Advanced Computing Center (TACC) at the University of Texas at Austin has teamed up with Dell to research performance issues associated with high performance computing clusters. Citing the increased use of such clusters by the HPC community, the manager of TACC’s HPC group says in the article:

With Dell’s support, we will continue to investigate and improve the performance of applications when run in these clustered computing environments and explore new techniques and algorithms for improving performance.
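For readers who haven’t done this kind of work: cluster performance studies often start with simple kernels on a single node, measuring achieved rates against theoretical ones, before moving to full applications across the interconnect. Here’s a minimal Python sketch of the idea, my own illustration rather than TACC’s methodology:

```python
import time
import numpy as np

# Toy single-node benchmark in the spirit of cluster performance
# studies (an illustration, not TACC's methodology): time a dense
# matrix multiply and report the achieved floating-point rate.

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# A dense n x n multiply costs roughly 2 * n**3 floating-point ops.
gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed:.3f} s, about {gflops:.1f} GFLOP/s")
```

Real cluster studies would of course lean on standard suites like HPL or the NAS Parallel Benchmarks run across many nodes, but the principle, comparing what you actually get to what the hardware promises, is the same.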
