Archive for the ‘Middleware’ Category

Got parallel programming skills?

Thursday, August 2nd, 2007

We all know the glamour of having the fastest HPC machine, the most nodes, or the fattest pipes. But what gets lost in the hoopla of hardware hype is that someone has to write the code for this stuff to be even marginally useful for handling enormous computations. Herein lies one of the problems with high-performance scientific computing: not enough skilled programmers. Simply put, software development isn’t keeping pace with hardware development, and hasn’t for some time. Writing code and applications (from middleware to debuggers) that enable a large, data-intensive computational problem to be broken into parts that are solved individually and then reassembled into a single solution is non-trivial. Though a little dated, Susan Graham and Marc Snir, of UC Berkeley and the University of Illinois at Urbana-Champaign respectively, touched on this still-relevant problem in their February 2005 CTWatch Quarterly article “The NRC Report on the Future of Supercomputing.” Gregory Wilson, a computer science professor, gets more specific in “Where’s the Real Bottleneck in Scientific Computing?” from American Scientist. A more recent discussion of the lag in software development can be found in Doug Post’s keynote talk “The Opportunities and Challenges for Computational Science and Engineering” from the inauguration of the new Virtual Institute - High Productivity Supercomputing (VI-HPS).
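The decompose-compute-reassemble pattern the post describes can be sketched in miniature with Python’s standard-library `multiprocessing` module, a toy stand-in for real HPC middleware (the problem and function names here are purely illustrative):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Solve one piece of the problem independently of the others.
    return sum(x * x for x in chunk)

def sum_of_squares(data, workers=4):
    # Decompose: split the input into roughly equal parts.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Compute each part in parallel, then reassemble one answer.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as a serial loop, computed piecewise.
    print(sum_of_squares(list(range(1000))))
```

Even in this trivial case, the programmer must decide how to partition the data, how to ship work to the workers, and how to combine the partial results; at supercomputer scale, with communication costs and failures in play, those same decisions are what make the software side so hard.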

The “Grid”

Wednesday, July 18th, 2007

You’ve read about it and might even have heard some talk about it: the Grid. So you want to know what it is and why it matters? We can help. There was a TV miniseries by that name, but the “Grid” we’re referring to here is a distributed computational resource. Ian Foster provided a nice description back in 2002 called What is the Grid? A Three Point Checklist. For an even more comprehensive explanation, visit NCSA’s What is the Grid? website, where a Who’s Who of the high-performance computing community answers many pivotal questions about the Grid and its use.

Re-visiting the Semantic Grid

Monday, February 6th, 2006

Back in May 2005, readers were reminded (and some informed for the first time) about the Semantic Grid effort that has been underway since 2001. Recently, IST Results did a piece on the Semantic Grid, touting more of the potential commercial benefits of such a resource. A significant component of the Semantic Grid, a methodologically sound technological infrastructure, is being addressed by the OntoGrid Project.

More in-depth information on both the Semantic Grid and the OntoGrid project can be found in this article.

Standards overload

Thursday, November 3rd, 2005

Standards are good, but too many of them can be bad. Such is the case with open standards for the Grid. In this article from Grid Computing Planet, the glut of standards is cited as one reason for the slower adoption of Grids, or at least for their slower migration from academia to the business enterprise. With the proliferation of web services, grid management tools are becoming more important, and the article also touches on the lack of consensus around Globus as the way to go in Grid middleware.

The Semantic Grid

Tuesday, May 31st, 2005

Widespread talk of the Semantic Grid seems to have cooled over the last couple of years. However, it is still under active development and moving along nicely. The formal effort began in 2001 as part of the e-Science program in the UK, with the goal of semantic interoperability built on an infrastructure

where all resources, including services, are adequately described in a form that is machine-processable….the Semantic Grid is an extension of the current Grid in which information and services are given well-defined meaning, better enabling computers and people to work in cooperation (from the Semantic Grid website).

Development seems to be gaining considerable speed as a greater number of research initiatives related to grid computing are underway. A good primer on the Semantic Grid effort can be found in this presentation (13 MB) given in Amsterdam in April by Dr. David De Roure, one of the lead researchers. Supercomputing Online also has a short piece about the effort.
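To make the idea of a “machine-processable” resource description concrete, here is a toy sketch: resources and services represented as subject-predicate-object triples that a program, not a person, can query. The real Semantic Grid builds on RDF and related W3C standards; every name below is invented for illustration:

```python
# Toy triple store: each entry is (subject, predicate, object).
# All identifiers here are made up; real deployments use RDF vocabularies.
triples = {
    ("cluster:atlas", "rdf:type", "grid:ComputeResource"),
    ("cluster:atlas", "grid:cpuCount", "512"),
    ("cluster:atlas", "grid:offersService", "svc:matrixSolve"),
    ("svc:matrixSolve", "rdf:type", "grid:Service"),
}

def query(subject=None, predicate=None, obj=None):
    # Match triples against a pattern; None acts as a wildcard.
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Software can now discover which resources offer which services
# without any human reading a web page.
print(query(predicate="grid:offersService"))
```

The point of the Semantic Grid is precisely this shift: once resource descriptions are structured data rather than prose, middleware can match jobs to resources automatically.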

Grady Booch on life at IBM

Friday, May 27th, 2005

InfoWorld has published an interesting Q&A with Grady Booch, best known as a co-creator of the Unified Modeling Language (UML). In the interview, Booch fields questions on a variety of topics, including parallelizing software and what happens when Moore’s law expires. Though toeing the company line, Booch nevertheless shares his insight into future application development and open source issues as well.
