CTWatch
March 2008
Urgent Computing: Exploring Supercomputing's New Role
Jack Dongarra, University of Tennessee

Our all-too-frequent sense of surprise at how fast time has passed is usually at its most acute when we come to the end of a long, collective effort, especially one which has proved personally meaningful to those who have participated. Now that we have arrived at the last issue of the Cyberinfrastructure Technology Watch Quarterly (CTWatch Quarterly), at least in its current incarnation, all of us on the CTWatch team are having this familiar experience, and in just that context. Typically this is also a good time to reflect on what has been accomplished and to think a little about where things should go next.

Back in 2004, when, in a moment of inspiration, Fran Berman saw that something like CTWatch Quarterly was needed as a complement to the Cyberinfrastructure Partnership (CIP) that the San Diego Supercomputer Center and the National Center for Supercomputing Applications were planning to form, the term “cyberinfrastructure” was still something of a buzzy neologism. The National Science Foundation, where the word was coined, had yet to fund the CIP, or to create the separate office that now bears this name. But some in the community recognized that “cyberinfrastructure” was not just marketing gloss, but that it represented a complex new reality which, in the digital age, was becoming more and more fundamental to scientific inquiry on every front. Thoroughly crosscutting, broadly interdisciplinary, and global in its reach and impact, the world of cyberinfrastructure already formed the nexus of a myriad of interconnected activities where the interests of government, private industry, and the academy all converged. Thus the spring of 2005 was a propitious time to launch a publication intended to provide a forum where the cyberinfrastructure community could discuss the opportunities, achievements, and challenges confronting it.

Looking back to review the three years and thirteen issues of CTWatch Quarterly, it is gratifying to see how many emerging developments and how much breadth of impact its pages have been able to reflect. Each installment contains illustrations of this point, but a few examples provide a happy reminder. Our issues on low power, high-performance computing (Fall 2005) and on the ramifications of the on-going revolution in multicore and heterogeneous processor architectures (Spring 2007) helped lead the way in raising awareness and educating the community on these watershed developments in the evolution of computing infrastructure. The two issues that focused on the global explosion of national and transnational cyberinfrastructure (Winter 2005, focusing on Europe, and Spring 2006, highlighting eight national projects spread across Asia, Africa, and South America) showed clearly that the era of e-Science has become truly global, and that the drive to remain competitive is being reflected in significant infrastructure investments around the world. The two recent issues on cyberinfrastructure for the humanities, arts, and social sciences (Summer 2007) and on the digitally-driven transformation of scholarly communications (Fall 2007) show how the impact of cyberinfrastructure is already reaching every field and discipline across the curriculum. Finally, in the current issue on urgent computing, guest editor Pete Beckman and his excellent group of authors show us that the practical and societal benefits of advanced cyberinfrastructure are on the verge of becoming more immediate, more universal, and more vitally important than ever before.

Given the current state of the field and the community, it is clear to me that the relevance of CTWatch Quarterly’s mission is far from exhausted. But the question of how to carry it forward into the post-CIP future remains open. Opportunities to continue with the quarterly in its current form are being explored, and if adequate funding can somehow be secured, publication may restart in the near future. Yet although that may be the easiest path to take, it may not be the most satisfying one. Perhaps the most remarkable feature of the cyberinfrastructure community is its propensity to innovate: not doing what you’ve done before with different tools, but reflecting on the untried opportunity space that emerging technology opens up for you and trying to envision your mission and your strategy for achieving it in a more original way. We are also searching for, and may yet find, a future for CTWatch more consonant with that spirit of innovation.

As we close up shop at the current stand, it is important to recognize the people who have been instrumental in helping us to achieve such success as we have had. First and foremost, we are profoundly grateful to the remarkable collection of guest editors and outstanding authors who worked with us over the last three years; they literally gave substance to our vision of a publication that could help keep the attention of the community focused on the leading edge of the cyberinfrastructure movement. Special thanks go to Fran Berman and Thom Dunning, and the CIP organizations they lead (SDSC and NCSA, respectively), for the constant and enthusiastic support they provided throughout. We also very much appreciate the contributions, feedback and suggestions of the members of our editorial board, who took time out of their busy schedules to help keep us on course and make us better.

Finally, I want to personally thank the members of the CTWatch team: Terry Moore (Managing Editor), Scott Wells (Production Editor), David Rogers (Graphic Designer), and Don Fike (Developer). Their expertise, diligence, and collective efforts, working through every phase of the process, made a high quality production like CTWatch Quarterly seem deceptively easy. Their presence makes the possibility of future community endeavors along the line of CTWatch a very happy prospect indeed.

Introduction
Pete Beckman, Mathematics and Computer Science Division, Argonne National Laboratory

Large-scale parallel simulation and modeling have changed our world. Today, supercomputers are not just for research and development or scientific exploration; they have become an integral part of many industries. A brief look at the Top 500 list of the world’s largest supercomputers shows some of the business sectors that now rely on supercomputers: finance, entertainment and digital media, transportation, pharmaceuticals, aerospace, petroleum, and biotechnology. While supercomputing may not yet be considered commonplace, the world has embraced high-performance computing (HPC). Demand for skilled computational scientists is high, and colleges and universities are struggling to meet the need for cross-disciplinary engineers who are skilled in both computation and an applied scientific domain. It is on this stage that a new breed of high-fidelity simulations is emerging – applications that need urgent access to supercomputing resources.

For some simulations, insights gained through supercomputer computation have immediate application. Consider, for example, an HPC application that could quickly calculate the exact location and magnitude of tsunamis immediately after an undersea earthquake. Since the evacuation of local residents is both costly and potentially dangerous, promptly beginning an orderly evacuation in only those areas directly threatened could save lives. Similarly, imagine a parallel wildfire simulation that coupled weather, terrain, and fuel models and could accurately predict the path of a wildfire days in advance. Firefighters could cut firebreaks exactly where they would be most effective. For these urgent computations, late results are useless results. As the HPC community builds increasingly realistic models, applications are emerging that need on-demand computation. Looking into the future, we might imagine event-driven and data-driven HPC applications running on-demand to predict everything from where to look for a lost boater after a storm to tracking a toxic plume after an industrial or transportation accident.

Of course, as we build confidence in these emerging computations, they will move from the scientist’s workbench and into critical decision-making paths. Where will the supercomputer cycles come from? It is straightforward to imagine building a supercomputer specifically for these emerging urgent computations. Even if such a system led the Top 500 list, however, it would not be as powerful as the combined computational might of the world’s five largest computers. Aggregating the country’s largest resources to solve a critical, national-scale computational challenge could provide an order of magnitude more power than attempting to rely on a prebuilt system for on-demand computation.

Furthermore, costly public infrastructure, idle except during an emergency, is inefficient. A better approach, when practical, is to temporarily use public resources during times of crisis. For example, rather than build a nationwide set of radio towers and transmitters to disseminate emergency information, the government requires that large TV and radio stations participate in the Emergency Alert System. When public broadcasts are needed, most often in the form of localized severe weather, broadcasters are automatically interrupted, and critical information is shared with the public.

As high-fidelity computation becomes more capable of predicting the future and is increasingly used for immediate decision support, governments and local municipalities must build infrastructures that can link together the largest resources from the NSF, DOE, NASA, and the NIH and use them to run time-critical urgent computations. For embarrassingly parallel applications, we might look to the emerging market for “cloud computing.” Many of the world’s largest Internet companies have embraced a model for providing software as a service. Amazon’s Elastic Compute Cloud (EC2), for example, can provide thousands of virtual machine images rapidly and cost effectively. For applications with relatively small network communication needs, it might be most effective for urgent, on-demand computations simply to be injected into the nation’s existing Internet infrastructure supported by Amazon, Yahoo, Google, and Microsoft.
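To illustrate why embarrassingly parallel workloads are such a natural fit for rented cloud capacity, the sketch below fans independent ensemble members out to a pool of workers. The member function and its inputs are hypothetical stand-ins rather than any provider’s API, but the pattern is the point: no member communicates with another, so capacity can simply be acquired when an event occurs.

```python
# Sketch of an embarrassingly parallel urgent ensemble (illustrative only).
# run_member() is a hypothetical stand-in for a simulation launched on a
# rented cloud instance; no inter-member communication is needed.
from concurrent.futures import ProcessPoolExecutor

def run_member(member_id, perturbation):
    """Pretend to run one independent ensemble member."""
    # A real member would launch a model with its own perturbed inputs.
    return member_id, sum(perturbation) / len(perturbation)

# One set of (invented) perturbed inputs per member.
perturbations = {i: [0.1 * i, 0.2 * i, 0.3 * i] for i in range(100)}

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_member, i, p) for i, p in perturbations.items()]
        results = dict(f.result() for f in futures)
    print(f"collected {len(results)} members")
```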

In April 2007, an urgent computing conference at Argonne National Laboratory brought together an international group of scientists to discuss how on-demand computations for HPC might be supported and change the landscape of predictive modeling. The organizers of that workshop realized that CTWatch Quarterly would be the ideal venue for exploring this new field. This issue describes how applications, urgent-computing infrastructures, and computational resources can support this new role for computing.

Suresh Marru, School of Informatics, Indiana University
Dennis Gannon, School of Informatics, Indiana University
Suman Nadella, Computation Institute, The University of Chicago
Pete Beckman, Mathematics and Computer Science Division, Argonne National Laboratory
Daniel B. Weber, Tinker Air Force Base
Keith A. Brewster, Center for Analysis and Prediction of Storms, University of Oklahoma
Kelvin K. Droegemeier, Center for Analysis and Prediction of Storms, University of Oklahoma


A tornado strikes the Kansas Plains near Clearwater. Photo by Keith Brewster.

The Linked Environments for Atmospheric Discovery (LEAD)1 2 project is pioneering new approaches for integrating, modeling, and mining complex weather data and cyberinfrastructure systems to enable faster-than-real-time forecasts of mesoscale weather systems, including those that can produce tornadoes and other severe weather. Funded by the National Science Foundation Large Information Technology Research program, LEAD is a multidisciplinary effort involving nine institutions and more than 100 scientists, students, and technical staff.

Foundational to LEAD is the idea that today’s static environments for observing, predicting, and understanding mesoscale weather are fundamentally inconsistent with the manner in which such weather actually occurs – namely, with often unpredictable rapid onset and evolution, heterogeneity, and spatial and temporal intermittency. To address this inconsistency, LEAD is creating an integrated, scalable framework in which meteorological analysis tools, forecast models, and data repositories can operate as dynamically adaptive, on-demand, Grid-enabled systems. Unlike static environments, these dynamic systems can change configuration rapidly and automatically in response to weather, react to decision-driven inputs from users, initiate other processes automatically, and steer remote observing technologies to optimize data collection for the problem at hand. Although mesoscale meteorology is the particular domain to which these innovative concepts are being applied, the methodologies and infrastructures are extensible to other domains, including medicine, ecology, hydrology, geology, oceanography, and biology.

The LEAD cyberinfrastructure is based on a service-oriented architecture (SOA) in which service components can be dynamically connected and reconfigured. A Grid portal in the top tier of this SOA acts as a client to the services exposed in the LEAD system. A number of stable community applications, such as the Weather Research and Forecasting model (WRF) 3, are preinstalled on both the LEAD infrastructure and TeraGrid 4 computing resources. Shell-executable applications are wrapped into Web services by using the Generic Service Toolkit (GFac) 5. When these wrapped application services are invoked with a set of input parameters, the computation is initiated on the TeraGrid computing resources; execution is monitored through Grid computing middleware provided by the Globus Toolkit 6. As shown in Figure 1, scientists construct workflows from preregistered, GFac-wrapped application services as dataflow graphs, where the nodes of the graph represent computations and the edges represent data dependencies. GPEL 7, a workflow enactment engine based on the industry-standard Business Process Execution Language 8, sequences the execution of each computational task based on control and data dependencies.
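To make the dataflow abstraction concrete, the following minimal sketch (plain Python, not the actual GFac or GPEL interfaces) shows how an enactment engine can sequence wrapped application services from their data dependencies alone: a service runs only after every service it depends on has produced its output.

```python
# Minimal sketch of dataflow-driven workflow enactment.
# Service names and invoke() are illustrative stand-ins for GFac-wrapped
# applications; GPEL/BPEL are not used here.
from graphlib import TopologicalSorter

# Each service maps to the list of services whose outputs it consumes.
workflow = {
    "ingest_radar": [],
    "assimilate":   ["ingest_radar"],
    "wrf_member_1": ["assimilate"],
    "wrf_member_2": ["assimilate"],
    "visualize":    ["wrf_member_1", "wrf_member_2"],
}

def invoke(service, inputs):
    """Stand-in for invoking a wrapped application service on a remote resource."""
    print(f"running {service} with inputs {inputs}")
    return f"{service}.out"          # pretend output data product

outputs = {}
# static_order() yields each node only after all of its predecessors.
for service in TopologicalSorter(workflow).static_order():
    deps = workflow[service]
    outputs[service] = invoke(service, [outputs[d] for d in deps])
```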

Figure 1. Dynamic workflow – WRF ensemble forecast initialized with assimilated data.


Gabrielle Allen, Center for Computation & Technology and Department of Computer Science, Louisiana State University
Philip Bogden, Department of Physics, Louisiana State University
Tevfik Kosar
Archit Kulshrestha
Gayathri Namala
Sirish Tummala, Center for Computation & Technology and Department of Computer Science, Louisiana State University
Edward Seidel, Center for Computation & Technology and Department of Physics, Louisiana State University

Introduction

Around half of the U.S. population lives in coastal areas, at risk from a range of coastal hazards including hurricane winds and storm surge, floods, tornadoes, tsunamis, and rising sea level. While changes in sea level occur over time scales measured in decades or more, other hazards such as hurricanes or tornadoes develop on timescales of days or hours, and early, accurate predictions of their effects are crucial for planning and emergency response.

On 29 August 2005, Hurricane Katrina (Fig. 1) hit New Orleans, with storm surge and flooding resulting in a tragic loss of life and destruction of property and infrastructure (Table 1). Soon after, Hurricane Rita caused similar devastation in the much less populated area of southwest Louisiana, and once again parts of New Orleans were under water. In both cases, mandatory evacuations were ordered only 19 hours before the hurricanes made landfall. Faster and more accurate analysis from prediction models could allow decision makers to order evacuations earlier and with more preparation; building such hurricane prediction infrastructure is one goal of the SURA SCOOP Project.


Figure 1. Satellite images of Hurricanes Katrina (left) and Rita (right), which made dramatic landfall on the southeast US coast in 2005. Katrina resulted in the loss of nearly 2,000 lives and caused some $120 billion of property damage. The storm size at landfall was 460 miles, with 145 mph winds (Category 3), and storm surges of up to 22 feet. [Image credits: MODIS Rapid Response Gallery]

The SCOOP Program 1 2 is creating an open, integrated network of distributed sensors, data, and computer models to provide a broad array of services for applications and research involving coastal environmental prediction. At the heart of the program is a service-oriented cyberinfrastructure, which is being developed by modularizing critical components, providing standard interfaces and data descriptions, and leveraging new Grid technologies and approaches for dynamic data-driven application systems 3. This cyberinfrastructure includes components for data archiving, integration, translation and transport, model coupling and workflow, event notification, and resource brokering.
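As a rough, hypothetical illustration of the event-notification and resource-brokering components mentioned above (none of these names correspond to actual SCOOP interfaces), the sketch below shows the basic pattern: an incoming storm advisory triggers a handler, and a broker selects the first resource that can start an urgent forecast before its deadline.

```python
# Hypothetical event-notification / resource-brokering sketch (not SCOOP code).
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_cores: int
    est_wait_minutes: int

def broker(resources, cores_needed, deadline_minutes):
    """Pick the soonest-available resource that meets the core count and deadline."""
    for r in sorted(resources, key=lambda r: r.est_wait_minutes):
        if r.free_cores >= cores_needed and r.est_wait_minutes <= deadline_minutes:
            return r
    return None

def on_advisory(advisory, resources):
    """Event handler: an advisory triggers a surge-model run if capacity exists."""
    chosen = broker(resources, cores_needed=512, deadline_minutes=60)
    if chosen is None:
        print(f"{advisory}: no resource can meet the deadline; escalate")
    else:
        print(f"{advisory}: launching surge ensemble on {chosen.name}")

resources = [Resource("cluster-a", 1024, 30), Resource("cluster-b", 256, 5)]
on_advisory("Advisory 23 for Hurricane X", resources)
```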


Hurricane Katrina
  Date: 23-30 Aug 2005.
  Category 3 landfall (peak winds: 145 mph) on 29 Aug, 6:10 am CDT, near Buras, LA.
  Voluntary evacuation of New Orleans: 37 hours before landfall. Mandatory evacuation: 19 hours before landfall.
  Human casualties: approx. 1,836. Property damage: $120 billion; New Orleans population reduced by 50%.
  Storm size (width) at landfall: 460 miles. Radius of hurricane-force winds at landfall: 125 miles. Coastal storm surge: 18-22 feet.
  Third most powerful hurricane to hit the U.S. coast; the most expensive; one of the five deadliest.

Hurricane Rita
  Date: 18-26 Sep 2005.
  Category 3 landfall (peak winds: 120 mph) on 24 Sep, 2:40 am CDT, at the Texas-Louisiana border.
  Mandatory evacuation of Galveston: 19 hours before landfall.
  Property damage: $35 billion; 10% of the population displaced from Houston and Galveston.
  Storm size (width) at landfall: 410 miles. Radius of hurricane-force winds at landfall: 85 miles. Coastal storm surge: 15-20 feet.

Table 1. Properties of hurricanes Katrina and Rita.


Benjamin S. Kirk, NASA Lyndon B. Johnson Space Center
Grant Palmer, NASA Ames Research Center
Chun Tang, NASA Ames Research Center
William A. Wood, NASA Langley Research Center

1. Introduction

On February 1, 2003, the Space Shuttle Orbiter Columbia suffered catastrophic structural failure during reentry, tragically killing all seven crewmembers on board. An extensive investigation conducted in the ensuing months identified foam-debris-induced damage to the reinforced carbon-carbon wing leading edge thermal protection system as the most probable root cause of the failure. During the course of the investigation, the Columbia Accident Investigation Board (CAIB) made a number of recommendations, which NASA agreed to implement before returning the Shuttle fleet to flight.

One of these recommendations, R3.8-2, addressed the need for computer models to evaluate thermal protection system damage that may result from debris impact. It reads:

Develop, validate, and maintain physics-based computer models to evaluate Thermal Protection System damage from debris impacts. These tools should provide realistic and timely estimates of any impact damage from possible debris from any source that may ultimately impact the Orbiter. Establish impact damage thresholds that trigger responsive corrective action, such as on-orbit inspection and repair, when indicated 1.

Implementing this recommendation was no small task and involved hundreds of personnel from NASA, Boeing, United Space Alliance, and other organizations. The result of this effort was the creation of a family of analysis tools that are used during the course of a Shuttle flight to assess the aerothermal, thermal, and structural impacts of a given damage site. These tools necessarily cross disciplines because, ultimately, the health of the vehicle depends on the coupled interaction of these three fields. The suite of tools spans the range of complexity from closed-form analytical models to three-dimensional, chemical nonequilibrium Navier-Stokes simulations of geometrically complex configurations.

This article provides an overview of the damage assessment process, which is now a standard part of every Shuttle mission. The primary focus is one aspect of this process: the rapid generation of high-fidelity aerothermal environments for a specific damage configuration using computational fluid dynamics (CFD) models 2 3. The application of such models requires immediate and reliable access to massively parallel computers and a high degree of automation in order to meet a very aggressive schedule. The remainder of this article is organized as follows: Section 2 provides an overview of the damage assessment process and required timeline, Section 3 describes the role of high-performance computing in rapidly generating aerothermal environments and the associated challenges, Section 4 details the specific example of damage that occurred on STS-118 during the summer of 2007, and Section 5 provides some observations and general conclusions that may be applicable to any process that demands urgent computational simulation.
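As a purely illustrative sketch of the degree of automation such a schedule demands (hypothetical steps and names, not NASA’s actual tooling), the fragment below chains mesh generation, solver submission, and job monitoring for a single damage case with no manual hand-offs.

```python
# Hypothetical automation sketch for an urgent CFD damage case (illustrative only).
import time

def build_mesh(damage_case):
    print(f"[{damage_case}] generating damaged-geometry mesh")
    return f"{damage_case}.mesh"

def submit_solver(mesh, cores):
    print(f"submitting Navier-Stokes run for {mesh} on {cores} cores")
    return "job-42"                    # stand-in job identifier

def wait_for(job_id, poll_seconds=1, max_polls=3):
    # A real pipeline would query the batch scheduler instead of sleeping.
    for _ in range(max_polls):
        time.sleep(poll_seconds)
        print(f"{job_id}: still running")
    print(f"{job_id}: complete")

def assess(damage_case):
    mesh = build_mesh(damage_case)
    job = submit_solver(mesh, cores=2048)
    wait_for(job)
    print(f"[{damage_case}] extracting aerothermal environment for thermal/structural teams")

assess("STS-118-tile-damage")
```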

2. Typical Flight Support Scenario
2.1 Data Acquisition

NASA and its commercial partners instituted a number of process and data-acquisition improvements during the two-and-a-half-year lapse between the Columbia tragedy and Discovery’s historic return-to-flight mission. These improvements were specifically designed to identify and assess the severity of damage sustained by the thermal protection system during launch and on-orbit operations. The majority of such damage has historically been caused by foam or ice shed from the Orbiter/External Tank/Solid Rocket Booster ascent stack, but a limited amount of damage has also been attributed to micrometeoroid and orbital debris hypervelocity impacts.

A number of ground-based and airborne imagery assets provide video coverage of the vehicle’s ascent to orbit. These imagery data are intensively reviewed in the hours after launch to identify potential debris-strike events. Multi-band radar assets are also deployed on land and at sea during the launch phase to identify any off-nominal signatures that may be related to debris impact. Additionally, the wing leading edge structural subsystem of each Orbiter was instrumented with a suite of accelerometers to aid in the detection of potential debris strikes.

Once the vehicle is in orbit, additional procedures are executed to help identify potential damage. On the second day of flight, two crewmembers perform a detailed scan of the reinforced carbon-carbon wing leading edge and nose cap. This scan is specifically designed to detect very small damage sites that could cascade into a catastrophic failure sequence under the extremely high temperatures of reentry.

Figure 1. Composite image of the Orbiter lower surface taken during Discovery’s return-to-flight mission.


Steven Manos
Stefan Zasada
Peter V. Coveney, Centre for Computational Science, Chemistry Department, University College London

Overview

Patient-specific medicine is the tailoring of medical treatments to the characteristics of an individual patient. Decision support systems based on patient-specific simulation have the potential to revolutionise the way clinicians plan courses of treatment for various conditions, such as viral infections and lung cancer, and the planning of surgical procedures, for example in the treatment of arterial abnormalities. Because patient-specific data can be used as the basis of simulation, treatments can be assessed for their effectiveness with respect to the patient in question before being administered, saving the potential expense of ineffective treatments and reducing, if not eliminating, lengthy laboratory procedures that typically involve animal testing.

In this article we explore the technical, clinical, and policy requirements of three distinct patient-specific biomedical projects currently under way: the patient-specific modelling of HIV/AIDS therapies, of cancer therapies, and of neuro-pathologies in the intracranial vasculature. These patient-specific medical simulations require access both to appropriate patient data and to the computational and network infrastructure on which to perform potentially very large-scale simulations. The computational resources required are supercomputers: machines with thousands of cores and large memory capacities, capable of running simulations within the time frames required in a clinical setting; the validity of results relies not only on the correctness of the simulation but also on its timeliness. Existing supercomputing site policies, which institute ‘fair share’ system usage, are not suitable for medical applications as they stand. To support patient-specific medical simulations, where life-and-death decisions may be made, computational resource providers must give urgent priority to such jobs and/or facilitate the advance reservation of resources, akin to booking and prioritising pathology laboratory testing.
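A toy sketch of the advance-reservation idea follows; it is hypothetical and not any site’s actual scheduler, but it shows the essential check: a clinically driven run is promised to a care team only if a window of sufficient capacity can be booked ahead of time.

```python
# Toy advance-reservation check for a clinically urgent simulation (illustrative only).
from datetime import datetime, timedelta

reservations = []   # list of (start, end, cores) already booked

def can_reserve(start, hours, cores, capacity=4096):
    """Conservatively check that 'cores' are free for the window [start, start+hours)."""
    end = start + timedelta(hours=hours)
    booked = sum(c for s, e, c in reservations if s < end and start < e)
    return booked + cores <= capacity

def reserve(start, hours, cores):
    if can_reserve(start, hours, cores):
        reservations.append((start, start + timedelta(hours=hours), cores))
        return True
    return False

surgery_prep = datetime(2008, 3, 10, 6, 0)      # invented example window
if reserve(surgery_prep, hours=4, cores=2048):
    print("blood-flow simulation window booked ahead of the procedure")
else:
    print("window unavailable; fall back to an urgent-priority queue")
```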

1. Introduction

Recent advances in advance reservation and cross-site run capabilities on supercomputers mean that, for the first time, biomedical computation can be envisaged in more than a purely research capacity. One area where this is especially true is the clinical decision-making process; the application of large-scale computation to offer real-time support for clinical decision making is now becoming feasible. The ability to utilise biomedical data to optimise patient-specific treatment means that, in the future, the effectiveness of a range of potential treatments may be assessed before they are actually administered, sparing the patient unnecessary or ineffective treatments. This should provide a substantial benefit to medicine and hence to the quality of life of human beings.

Traditional medical practice requires a physician to use judgement and experience to decide on the course of treatment best suited to an individual patient’s condition. While the training and experience of physicians hone their ability to select the most effective treatment for a particular ailment from the range available, this decision-making process often does not take into account all of the data potentially available. Indeed, in many cases the sheer volume or nature of the available data makes it impossible for a human to process it as part of the decision-making process, and it is therefore discarded. For example, in the treatment of HIV/AIDS, analysis of the viral genotype yields a prediction of phenotype, in terms of the virus’s sensitivity to a number of treatments; the complex variation inherent in these data makes the selection of a treatment for a particular patient on the basis of such predictions fairly subjective.
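To make the decision-support idea concrete, here is a deliberately simplified, hypothetical sketch in which candidate drug regimens are ranked by the predicted sensitivity of a patient’s viral genotype to each drug; every name and score below is invented, and a real system would weigh far more factors.

```python
# Deliberately simplified decision-support sketch (all names and scores invented).
# Rank candidate regimens by the predicted sensitivity of a patient's viral
# genotype to each drug; higher totals suggest a more active regimen.
predicted_sensitivity = {     # hypothetical genotype-derived scores in [0, 1]
    "drug_A": 0.9, "drug_B": 0.2, "drug_C": 0.7, "drug_D": 0.8, "drug_E": 0.4,
}

candidate_regimens = [
    ("drug_A", "drug_C", "drug_D"),
    ("drug_A", "drug_B", "drug_E"),
    ("drug_B", "drug_C", "drug_E"),
]

def regimen_score(regimen):
    return sum(predicted_sensitivity[d] for d in regimen)

for regimen in sorted(candidate_regimens, key=regimen_score, reverse=True):
    print(regimen, round(regimen_score(regimen), 2))
```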


Karla Atkins
Christopher L. Barrett
Richard Beckman
Keith Bisset
Jiangzhou Chen
Stephen Eubank
Annette Feng
Xizhou Feng
Steven D. Harris
Bryan Lewis
V.S. Anil Kumar
Madhav V. Marathe
Achla Marathe
Henning Mortveit
Paula Stretz, Network Dynamics and Simulation Science Laboratory, Virginia Bio-Informatics Institute, Virginia Polytechnic Institute and State University

Introduction

This article describes our ongoing efforts to develop a global modeling, information & decision support cyberinfrastructure (CI) that will provide scientists and engineers with novel ways to study large, complex socio-technical systems. It consists of the following components:

  1. High-resolution scalable models of complex socio-technical systems
  2. Service-oriented architecture and delivery mechanism for facilitating the use of these models by domain experts
  3. Distributed coordinating architecture for information fusion, model execution and data processing
  4. Scalable data management architecture and system to support model execution and analytics
  5. Scalable methods for visual and data analytics to support analysts

To guide the initial development of our tools, we are concentrating on agent-based models of inter-dependent societal infrastructures, spanning large urban regions. Examples of such systems include: regional transportation systems; regional electric power markets and grids; the Internet; ad-hoc telecommunication, communication and computing systems; and public health services. Such systems can be viewed as organizations of organizations. Indeed, functioning societal infrastructure systems consist of several interacting public and private organizations working in concert to provide the necessary services to individuals and society. Issues related to privacy of individuals, confidentiality of data, data integrity and security all arise while developing microscopic models for such systems. See 1 2 3 for additional discussion (also see Figure 1).

Figure 1. Schematic of societal infrastructure systems (adapted from 2).

The need to represent functioning population centers during complex incidents such as natural disasters and human-initiated events poses a very difficult scientific and technical challenge that calls for new state-of-the-art technology. The system must be able to handle complex co-evolving networks with over 300 million agents (individuals), each with individual itineraries and movements, millions of activity locations, thousands of activity types, and hundreds of communities, each with local interdependent critical infrastructures. The system must be able to focus attention on demand and must support the needs of decision makers at various levels. The system must also support related functions such as policy analysis, planning, course-of-action analysis, incident management, and training in a variety of domains (e.g., urban evacuation management, epidemiological event management, bio-monitoring, population risk exposure estimation, logistical planning and management of isolated populations, site evacuations, interdependent infrastructure failures).
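A minimal, illustrative sketch of the agent-based idea appears below; it bears no resemblance to the production system’s scale or fidelity, but it shows the core representation: synthetic agents follow daily itineraries over activity locations, and co-location induces the contact network that epidemiological or infrastructure models would consume.

```python
# Minimal agent-based sketch: synthetic agents follow itineraries over activity
# locations; co-location at the same place and hour defines contacts.
from collections import defaultdict
from itertools import combinations

# agent -> list of (hour, location) visits for a tiny synthetic population
itineraries = {
    "a1": [(8, "home_1"), (9, "office_3"), (18, "store_7")],
    "a2": [(8, "home_2"), (9, "office_3"), (18, "home_2")],
    "a3": [(8, "home_1"), (9, "school_5"), (18, "store_7")],
}

occupancy = defaultdict(set)            # (hour, location) -> agents present
for agent, visits in itineraries.items():
    for hour, location in visits:
        occupancy[(hour, location)].add(agent)

contacts = set()
for agents in occupancy.values():
    contacts.update(combinations(sorted(agents), 2))

print(sorted(contacts))   # edges of the resulting contact network
```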


Paul Tooby
Dong Ju Choi
Nancy Wilkins-Diehr, San Diego Supercomputer Center


Somewhere in Southern California a large earthquake strikes without warning, and the news media and the public clamor for information about the temblor: Where was the epicenter? How large was the quake? What areas did it impact?

A picture is worth a thousand words – or numbers – and the San Diego Supercomputer Center (SDSC) 1 at UC San Diego is helping to provide the answers. Caltech computational seismologist Jeroen Tromp can now give the public movies that tell the story in a language that’s easy to understand, revealing waves of ground motion spreading out from the earthquake, and he can deliver these movies in just 30 minutes with the help of a supercomputer at SDSC. But he can’t do it by submitting a job to a traditional computing batch queue and waiting hours or days for the results.

Figure 1. Frame from a movie of a “virtual earthquake” simulation of the type that will be run on SDSC’s new OnDemand system to support event-driven science. The movie shows the up-and-down velocity of the Earth’s surface as waves radiate out from a magnitude 4.3 earthquake centered near Beverly Hills, California. Strong blue waves indicate the surface is moving rapidly downward, while red/orange waves indicate rapid upward motion. Courtesy of Jeroen Tromp, ShakeMovie, Caltech.

Tromp is an example of the new users in today’s uncertain world who require immediate access to supercomputing resources 2. To meet this need, SDSC has introduced OnDemand, a new supercomputing resource that will support event-driven science 3.

“This is the first time that an allocated National Science Foundation (NSF) TeraGrid supercomputing resource will support on-demand users for urgent science applications,” said Anke Kamrath, director of User Services at SDSC. “In opening this new computing paradigm we’ve had to develop novel ways of handling this type of allocation as well as scheduling and job handling procedures.”
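One simplified way to picture such scheduling and job-handling procedures (a hypothetical sketch, not SDSC’s actual policy or software) is a preemption rule: when an urgent, event-driven job arrives, enough recently started batch jobs are requeued to free the nodes it needs.

```python
# Simplified preemption sketch for an on-demand resource (illustrative only).
def free_nodes_for(urgent_job_nodes, running_jobs, total_nodes):
    """Return the batch jobs to preempt so the urgent job can start immediately."""
    in_use = sum(j["nodes"] for j in running_jobs)
    available = total_nodes - in_use
    to_preempt = []
    # Preempt the most recently started batch jobs first (least work lost).
    for job in sorted(running_jobs, key=lambda j: j["start"], reverse=True):
        if available >= urgent_job_nodes:
            break
        to_preempt.append(job)
        available += job["nodes"]
    return to_preempt if available >= urgent_job_nodes else None

running = [
    {"name": "batch-101", "nodes": 64, "start": 1},
    {"name": "batch-102", "nodes": 32, "start": 5},
]
victims = free_nodes_for(urgent_job_nodes=80, running_jobs=running, total_nodes=128)
print("preempt:", [j["name"] for j in victims] if victims else "cannot satisfy")
```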


