March 2008
Urgent Computing: Exploring Supercomputing's New Role
Suresh Marru, School of Informatics, Indiana University
Dennis Gannon, School of Informatics, Indiana University
Suman Nadella, Computation Institute, The University of Chicago
Pete Beckman, Mathematics and Computer Science Division, Argonne National Laboratory
Daniel B. Weber, Tinker Air Force Base
Keith A. Brewster, Center for Analysis and Prediction of Storms, University of Oklahoma
Kelvin K. Droegemeier, Center for Analysis and Prediction of Storms, University of Oklahoma


The SPRUCE portal provides a single point of administration and authorization for urgent computing across an entire Grid. It consists of three parts:

  • The Web-based administrative interface allows privileged site administrators to create, issue, monitor, and deactivate right-of-way tokens. It features a hierarchical structure, allowing management of specific sub-domains.
  • The Web service-based user interface permits token holders to activate an urgent computing session and to manage user permissions.
  • The authentication service verifies urgent computing job submissions. A local site job manager agent queries the remote SPRUCE server to ensure that the submitting user is associated with an active token that gives permission to run urgent jobs on the given resource at the requested urgency.

Both the user interface and the authentication service communicate with the SPRUCE server via a Web services interface. External portals and workflows can become SPRUCE-enabled simply by incorporating the necessary Web service invocations. Users who prefer a Web-based interface can use the SPRUCE user portal. All users can monitor basic statistics, such as a token's remaining lifetime and the tokens with which they are currently associated. These interfaces require minimal additional training, making SPRUCE appropriate for emergency situations.
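The authorization check described above can be illustrated with a short sketch. The function below mimics the decision a local job manager agent might make after querying the remote SPRUCE server: admit an urgent job only if the submitting user holds an active token for the given resource at (or above) the requested urgency. The data layout, field names, and urgency levels here are illustrative assumptions, not the actual SPRUCE API.

```python
# Hypothetical sketch of the token check a local job manager agent performs
# before admitting an urgent job. In SPRUCE this decision is made by querying
# the remote server over a Web services interface; here the server's response
# is stood in for by a local list of token records (illustrative data only).

URGENCY_LEVELS = {"low": 1, "medium": 2, "high": 3}

# Example of the kind of record the SPRUCE server might return for a
# user's active right-of-way tokens (field names are assumptions).
ACTIVE_TOKENS = [
    {"user": "forecaster1", "resource": "teragrid.bigred",
     "max_urgency": "high", "seconds_remaining": 3600},
]

def authorize_urgent_job(user, resource, requested_urgency):
    """Return True if the user is associated with an active token that
    permits an urgent job on this resource at the requested urgency."""
    for token in ACTIVE_TOKENS:
        if (token["user"] == user
                and token["resource"] == resource
                and token["seconds_remaining"] > 0          # token still active
                and URGENCY_LEVELS[requested_urgency]
                    <= URGENCY_LEVELS[token["max_urgency"]]):
            return True
    return False

# A job at or below the token's urgency on the matching resource is admitted;
# any other combination is rejected.
print(authorize_urgent_job("forecaster1", "teragrid.bigred", "medium"))  # True
print(authorize_urgent_job("forecaster1", "teragrid.ranger", "high"))    # False
```

The key design point this sketch reflects is that the policy decision lives with the SPRUCE server, not the local scheduler: the site agent only needs to ask one yes/no question per submission, which is what lets external portals and workflows become SPRUCE-enabled with a single Web service invocation.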

Proof of Concept
LEAD-SPRUCE urgent computing aimed to predict severe weather during Spring 2007

LEAD applied some of its technology, in real time, for on-demand forecasting of severe weather during the 2007 National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Test Bed (HWT) [16], a multi-institutional program designed to study future analysis and prediction technologies in the context of daily operations. The HWT 2007 spring experiment was a collaboration among university faculty and students, government scientists, and NOAA and private forecasters to further our understanding and use of storm-scale numerical weather prediction in weather forecasting. LEAD researchers and scientists, in coordination with the SPRUCE urgent computing team, were in a unique position to work with HWT participants to expose this technology to real-time forecasters, students, and research scientists.

The 2007 effort addressed two important LEAD-related challenges: (1) the use of storm-resolving ensembles for specifying uncertainty in model initial conditions and quantifying uncertainty in model output, and (2) the application of dynamically adaptive, on-demand forecasts that are created automatically, or by humans, in response to existing or anticipated atmospheric conditions. A key aspect of the spring experiments was that the daily forecasts were evaluated not only by operational forecasters in the NOAA Storm Prediction Center (SPC) but also by dozens of faculty and researchers who visited the Hazardous Weather Test Bed in Norman, Oklahoma, during the seven-week period. The SPC used a formal procedure to evaluate the daily forecasts (additional details may be found in [17]).

The LEAD participation in the HWT 2007 spring experiments is described in detail in [18]. Briefly, the effort sought an initial assessment of the following:

  • Quantitative skill of storm-resolving ensemble forecasts compared to their deterministic counterparts at similar (experimental) and coarser (operational) grid spacings
  • Predictability of deep convection and organized mesoscale convective systems
  • Extent to which dynamically adaptive prediction leads to quantitative forecast improvements, possible negative consequences of adaptation, and an evaluation of strategies for making decisions regarding when, where, and how to adapt
  • Ability of the TeraGrid to accommodate scheduled, on-demand, and urgent computing applications that have strict quality-of-service requirements and use a substantial portion of available resources for an extended period of time


Reference this article
Marru, S., Gannon, D., Nadella, S., Beckman, P., Weber, D. B., Brewster, K. A., Droegemeier, K. K. "LEAD Cyberinfrastructure to Track Real-Time Storms Using SPRUCE Urgent Computing," CTWatch Quarterly, Volume 4, Number 1, March 2008. http://www.ctwatch.org/quarterly/articles/2008/03/lead-cyberinfrastructure-to-track-real-time-storms-using-spruce-urgent-computing/
