CTWatch
February 2006
International Cyberinfrastructure: Activities Around the Globe
Introduction
Thom Dunning, Director, National Center for Supercomputing Applications
Professor and Distinguished Chair for Research Excellence, Department of Chemistry
University of Illinois at Urbana-Champaign

Radha Nandkumar, Senior Research Scientist
Program Director, International Affiliations and Campus Relations
National Center for Supercomputing Applications

Introduction

Cyberinfrastructure is now essential for advancing scientific discovery and the state of the art in engineering. It doesn’t matter whether the subject is the inner workings of the universe or the inner workings of the economy, the design of a new chemical process or the design of a new material, new insights into how cells function or the delivery of personalized medicine, the spawning of a tornado or the planning of urban development. The basic fact remains the same: cyberinfrastructure is now a driver of science and engineering. Without it, science and engineering will not reach their full potential.

But science and engineering are global activities. There is not an American chemistry and a French chemistry, nor is there a Japanese electrical engineering and a Brazilian electrical engineering. Scientists and engineers around the globe are focused on unraveling the secrets of nature and applying this hard-won knowledge to the betterment of humanity. Cyberinfrastructure must support this global activity. In fact, it is our belief that cyberinfrastructure, properly designed and constructed, will advance science and engineering as a global activity by facilitating access to resources and expertise wherever they are located.

There are three intertwined strands of a global cyberinfrastructure:

Cyberenvironments: to provide researchers with the ability to access, integrate, automate, and manage complex, collaborative projects across disciplinary as well as geographical boundaries.

Cyber-resources: to ensure that the most demanding scientific and engineering problems can be solved and that the solutions are obtained in a timely manner.

Cybereducation: to ensure that the benefits of the national cyberinfrastructure are made available to educators and students throughout the country and the world.

NSF’s latest version of "Cyberinfrastructure Vision for 21st Century Discovery" was released on January 20, 2006. One of the guiding principles in this vision is "national and international partnerships, public and private, that integrate CI users and providers and benefit NSF’s research and education communities are … essential for enabling next-generation science and engineering."1

During his keynote address2 at NCSA’s 20th Anniversary Celebration in January 2006, entitled "Un-common sense: A recipe for a cyber planet," Dr. Arden Bement, Director of the National Science Foundation, remarked that "cyberinfrastructure will take research and education to a new plane of discovery. It is critical for advancing knowledge in the face of a dynamic and changing global technological environment." In discussing issues related to global competition and sustaining the long history of technological leadership that the US has enjoyed, Dr. Bement provided some uncommon-sense advice: "We should pursue more global involvement, not less. The rapid spread of computers and information tools compels us to join hands across borders and disciplines if we want to stay in the race."

For this issue of CTWatch Quarterly, we invited articles from several key influential trend-setters across the globe and asked them to provide their vision for cyberinfrastructure in their respective environs and for extending it into an international context. (The November 2005 issue of CTWatch Quarterly3 focused on cyberinfrastructure in Europe.)


John O’Callaghan, Australian Partnership for Advanced Computing

1. Introduction to APAC

The Australian Partnership for Advanced Computing (APAC) was established by the Australian Government to strengthen the advanced computing capabilities in Australia.

It is now a national partnership of eight organisations: one in each State, as well as ANU1 and CSIRO2. The State-based partners are joint ventures involving most of the Australian universities and have received strong support from the State Governments and their members. All eight APAC partner organisations are listed at the end of the article.

APAC established the APAC National Facility in 2001 to provide a world-class peak computing facility for Australian researchers in higher education institutions. It also initiated programs to significantly increase the expertise and skills in partner organisations to support users of advanced computing systems.

In recent years, the Federal Government has supported APAC broadening its role to provide an advanced computing, information and grid infrastructure for the Australian research community. The APAC National Grid is allowing researchers to access distributed computation and information facilities as a single virtual system and is providing a new range of services to support research collaboration, nationally and internationally.

APAC’s vision is for Australian research teams to have seamless access to distributed computation and information facilities as part of the global research infrastructure. This vision is aligned with recent Government and institutional initiatives that focus on eResearch. For example, the Australian Government has established a National Collaborative Research Infrastructure Strategy (NCRIS)1, which provides a coordinated approach to the deployment and support of Australia’s research infrastructure.

This paper outlines the concept and activities of the APAC National Grid. More information on APAC can be found at its website2.

2. The Concept of the APAC National Grid

The APAC National Facility, which is hosted at the ANU, provides advanced computing services and specialist support to over 750 users around Australia3. Most of these users are allocated resources by ‘merit-based’ granting schemes.

The peak system at the National Facility is an SGI Altix 3700 Bx2 cluster with 1680 processors; it was ranked number 35 in the Top500 list of November 2005. The facility also houses a mass data storage system based on Sun servers running SAM-QFS and a StorageTek tape silo with petabyte capacity. The system supports a number of ‘data-intensive’ projects, including some in linguistics and the social sciences.

In addition to this facility, the partners manage separate facilities and play a vital role in developing Australia’s capability in advanced computing, information and grid infrastructure. They provide operational advanced computing services to their users and are involved in research, development, education, training and outreach activities.


Marco A Raupp and Bruno Schulze, National Laboratory for Scientific Computing, LNCC
Michael Stanton and Nelson Simões, National Research and Education Network, RNP

Introduction

Long-distance, high-speed, low-cost networking has encouraged the development of applications that take advantage of geographically distributed resources, opening up research directions that were previously limited or unexplored for economic and practical reasons. The establishment of a cyberinfrastructure allows mature, scalable computing for different application communities.

Grid initiatives in Brazil were initially driven by international collaborations in several application areas and the pursuit of higher network bandwidth and larger computational facilities. In response to this demand, the National Laboratory for Scientific Computing – LNCC headed a proposal formulated together with representatives of application groups, computer and computational science, computer networking, high-performance computing, and federal government funding agencies belonging to the Ministry of Science and Technology (MCT). This early proposal was based on a number of existing international initiatives and focused on improving connectivity and communication performance for the coordinated use of existing regional HPC centers, as well as a number of academic and research institutions as potential users. At this time there were seven regional HPC centers and nine academic and research institutions involved, with connectivity provided by the Brazilian National Research and Education Network – RNP, and funding by the MCT funding agencies. The application areas included high-energy physics, bioinformatics, climate and weather forecasting and oil industry needs, among others.

In 2003, the legal framework was established for a National System for HPC (SINAPAD) with LNCC being designated the national coordinator (on behalf of MCT), and was recognized as part of the Federal Education, Science & Technology infrastructure. SINAPAD consisted of a network of regionally distributed, operational HPC centers aimed at providing computation on demand for education and scientific purposes, with a proposed operational structure based on mid-sized computer systems and clusters organized into a grid for the sharing of resources and reduction of idle time. Clusters of a few hundred processors were planned for each center, combining distributed and shared memory machines, facilities for data storage and handling, user friendly access through a web portal, security, accounting, and the option of several alternative architectures.


N. Mohan Ram, Chief Investigator – GARUDA
S. Ramakrishnan, Director General – C-DAC

1. Introduction

GARUDA1 is a collaboration of science researchers and experimenters on a nationwide grid of computational nodes, mass storage and scientific instruments that aims to provide the technological advances required to enable data and compute intensive science for the 21st century. One of GARUDA's most important challenges is to strike the right balance between research and the daunting task of deploying innovation into some of the most complex scientific and engineering endeavors being undertaken today.

Building a commanding position in Grid computing is crucial for India. By allowing researchers to easily access supercomputer-level processing power and knowledge resources, grids will underpin progress in Indian science, engineering and business. The challenge facing India today is to turn technologies developed for researchers into industrial strength business tools.

The Department of Information Technology2 (DIT), Government of India, has funded the Centre for Development of Advanced Computing3 (C-DAC) to deploy the nationwide computational grid ‘GARUDA’, which will connect 17 cities across the country in its Proof of Concept (PoC) phase, with the aim of bringing ‘Grid’ networked computing to research labs and industry. GARUDA will accelerate India's drive to turn its substantial research investment into tangible economic benefits.

2. Objectives

GARUDA aims at strengthening and advancing scientific and technological excellence in the area of Grid and Peer-to-Peer technologies. The strategic objectives of GARUDA are to:

  • Create a test bed for the research and engineering of technologies, architectures, standards and applications in Grid Computing
  • Bring together all potential research, development and user groups who can help develop a national initiative on Grid computing
  • Create the foundation for the next generation grids by addressing long term research issues in the strategic areas of knowledge and data management, programming models, architectures, grid management and monitoring, problem solving environments, grid tools and services

The following key deliverables have been identified as important to achieving the GARUDA objectives:

  • Grid tools and services to provide an integrated infrastructure to applications and higher-level layers
  • A Pan-Indian communication fabric to provide seamless and high-speed access to resources
  • Aggregation of resources including compute clusters, storage and scientific instruments
  • Creation of a consortium to collaborate on grid computing and contribute towards the aggregation of resources
  • Grid enablement and deployment of select applications of national importance requiring aggregation of distributed resources

To achieve the above objectives, GARUDA brings together a critical mass of well-established researchers from 45 research laboratories and academic institutions that have formulated an ambitious program of activities.


Masao Sakauchi, Shigeki Yamada, Noboru Sonehara, Shigeo Urushidani, Jun Adachi, Kazunobu Konishi, National Institute of Informatics (NII), Tokyo, Japan
Satoshi Matsuoka, Tokyo Institute of Technology / NII

1. Introduction

The Cyber Science Infrastructure (CSI) is a new comprehensive framework in which Japanese universities and research institutions are collaboratively constructing an information technology (IT) based environment to boost scientific research and education activities. Various initiatives are reorganized and included in CSI, such as the national research grid initiative, the university PKI and authentication system initiative, and the academic digital contents projects, as well as the project for a next-generation high-speed network. CSI was launched in late 2004 as a collaborative effort of leading universities, research institutions, and the National Institute of Informatics (NII).

NII is an interuniversity research institution that was established in April 2000 to conduct comprehensive research on informatics; it is Japan's only academic research institution dedicated to informatics and IT. The Institute has also been assigned a pivotal role in developing a scientific information and networking infrastructure for Japan, and it therefore maintains a service operation arm for networking and for providing scholarly information. NII puts priority on developing cutting-edge technologies for networks and information services and on maintaining an infrastructure that will improve the networking and information environment of Japanese universities and research institutions.

In this article, we describe the current CSI activities being coordinated by NII as a collaborative effort with Japanese universities and research institutions.

2. Cyber Science Infrastructure as Next-generation Academic Information Environment

For more than 20 years, NII and its predecessor institution, NACSIS, have provided a network and information service infrastructure for Japanese universities and research institutions, with the budgetary support of Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT). In this undertaking, NII is responsible for planning a better information environment and being a coordinator that can meet the diverse expectations of research communities and higher education institutions.
 
After the corporatization of Japanese national universities in April 2004, which significantly impacted research and educational communities and brought about the transformation of university administration systems, NII began a new network coordination initiative with leading universities and research communities that were shifting towards IT-based research. NII believes that some sort of core body is necessary to lead an initiative that deals with issues shared by many research and educational institutions. In addition to the network coordination, it considers the following three goals to be urgent for the Japanese research community:

  • Design and deployment of a next-generation high-speed network for research institutions and operation of this network as a stable information infrastructure for research and education activities,
  • Development of scholarly databases and digital libraries, enabling the dissemination of scholarly information from research institutions, and
  • Promotion of informatics research jointly undertaken with universities.

Accomplishing these goals is NII's most important mission, so NII has integrated its activities for developing information infrastructures into the Cyber Science Infrastructure (CSI) initiative, incorporating the prospect of cooperating with researchers outside NII who share these three goals. The executive organization for network planning and coordination was created by NII in early 2005. It identifies CSI as a joint initiative among universities and research institutions aimed at evolving the nation's scientific information infrastructure.

In 2005, the CSI initiative obtained support from MEXT and the Council for Science and Technology Policy of the Japanese government; this backing underscored CSI's importance as a national initiative and ensured that NII's budget for fiscal year 2006 would include funds for CSI. The executive framework for development and dissemination of scholarly information is also part of this initiative, as is the framework for network-related activities, which include the next-generation high-speed optical network, the national research grid initiative, and PKI implementation.


Hyeongwoo Park, Pillwoo Lee, Jongsuk Ruth Lee, Sungho Kim, Jaiseung Kwak, Kum Won Cho, Sang-Beom Lim, Jysoo Lee
KISTI (Korea Institute of Science and Technology Information) Supercomputing Center, Korea

1. Introduction

The number of subscribers for high-speed internet service in Korea reached more than twelve million at the end of 2005. This was the result of a national project, VSIN (Very high Speed Information Network), which was launched in 1995 by MIC (Ministry of Information and Communication) of Korea. The most notable result was the completion of a nation-wide optical communication cable infrastructure. It has provided high-speed communication networks to most commercial buildings and apartments in Korea. The Korean government then built internet services upon the infrastructure, which enabled e-government services, e-commerce services, and other IT application services with low cost and high quality.

The governmental budget for science and technology research and development reached 7 trillion won (about US$7 billion) in 2005. After the success of VSIN, the Korean government has sought to enhance the competitiveness of its science and technology, in fields such as biotechnology and nanotechnology where Korea still lags behind other developed countries, by introducing VSIN and advanced IT technologies, such as the Grid, into the research process. As part of this plan, an initiative for the comprehensive implementation of a Korean national Grid infrastructure (K*Grid) was started in 2002 by MIC.

KISTI (Korea Institute of Science and Technology Information) plays a leading role in construction and operation of the production quality Grid infrastructure needed for large-scale collaborative research in scientific and commercial applications. The main goal of the K*Grid infrastructure, which integrates huge amounts of computing power, massive storage systems, and experimental facilities as a virtual single system, is to provide an extremely powerful research environment for both industries and academia. The K*Grid project includes construction of the K*Grid infrastructure including Access Grid, development of its middleware, and research and development of Grid applications.

In this article, we describe the current status of activities for the construction and utilization of cyberinfrastructure in Korea.


Rob Adam, Director General's Office - Department of Science and Technology, Pretoria
Cheryl de la Rey, Deputy Vice-Chancellor's Office, Research & Innovation - University of Cape Town
Kevin J. Naidoo, Computational Chemistry, Department of Chemistry - University of Cape Town
Daya Reddy, Centre for Research in Computational & Applied Mechanics - University of Cape Town

1. Introduction

In his 2002 State of the Nation address, President Thabo Mbeki of South Africa singled out Information and Communication Technology (ICT) as "a critical and pervasive element in economic development," and recommended the establishment of an "ICT University." The National Research and Development Strategy of South Africa had earlier also clearly identified ICT as one of the key technology platforms of the modern age, and one which has a central place in initiatives aimed at promoting development in South Africa.

The vision presented by President Mbeki has taken concrete form in the establishment of the Meraka Institute, the purpose of which is to facilitate national economic and social development through human resource development and needs-based research and innovation, leading in turn to products, expertise and services related to Information and Communication Technologies (ICT).

The Centre for High Performance Computing (CHPC)1 is a component of the Meraka Institute. This article describes the objectives and structure of the CHPC and the progress that has been made to date in the establishment of this facility.

2. Background

South Africa is currently in the throes of expanding its scientific research and innovation base and is at the same time identifying focal directions, many of which have a direct link to social and economic development. While the National R&D Strategy sets the framework, there was a recognition that an ICT strategy was needed to chart a comprehensive national approach to ICT R&D in order to maximise its potential economic contribution. Through a co-ordinated national approach, a country like South Africa could not only develop the critical mass to boost its own national development, but also achieve international competitiveness in identified focal areas.

The overall purpose of the national ICT Strategy is to create an enabling environment for the advancement of ICT R&D and Innovation in identified domains. Computational Science and High Performance Computing are two of these. This stems from the firm recognition that access to high performance computing facilities is of central importance to the success of the technology missions identified in the National R&D Strategy. Key examples in this regard are Biotechnology, particularly with reference to research into the major infectious diseases such as HIV/AIDS and tuberculosis, advanced manufacturing technology (e.g., computational simulations of design and manufacturing processes, and computational materials design), technologies to utilise and protect our natural resources and ensure food security (e.g., climate systems analysis and disaster forecasting), and technology for poverty reduction (e.g., behavioural modelling in social research; financial management; HPC in SMEs). Similarly, a number of science missions were identified in the R&D Strategy as standing to benefit from the establishment of an HPC facility; examples are the Square Kilometre Array (SKA), the National Bioinformatics Network (NBN) and the Global Earth Observing System of Systems (GEOSS). High Performance Computing is therefore clearly perceived, in relevant national strategic plans, to be a platform for scientific and technological innovation through which the national R&D strategy can be accelerated. The dual impact of such a platform will be increased global competitiveness and improved local quality of life.

Funding for three years (2006-2008) has been secured for the high performance computing initiative. In addition, parallel investment in a South African National Research Network (SANReN), intended to provide high bandwidth connectivity for South African researchers, has been planned.


Whey-Fone Tsai, Fang Pang Lin, Weicheng Huang, Steven Shiau, Ming Hsiao Lee, Alex Wu, John Clegg, National Center for High-Performance Computing, Taiwan

Introduction

The Knowledge Innovation National Grid (KING) project (Figure 1) began as an initiative under the "Challenge 2008" program, a comprehensive six-year National Development Plan formulated by the Taiwan government in 2002. The objective of the KING project (2003-2006) is to deploy a Grid infrastructure and conduct innovative pilot applications. KING's twin project, the TaiWan Advanced Research and Education Network (TWAREN, 2003-2007) (Figure 2), is a world-class, island-wide, next-generation research and education network. The KING and TWAREN initiatives form the kernel of Taiwan's cyberinfrastructure and provide an advanced and collaborative environment for our national research, government, and industrial communities. In the first stage of the project (2003-2006), we will deploy the necessary Grid resources and develop the required support technologies. We will then launch our Grid services beginning in 2007.

Figure 1. The Application-Driven Project, KING.

Figure 2. The TWAREN Network.


Peter Arzberger, University of California, San Diego
Philip Papadopoulos, San Diego Supercomputer Center and University of California, San Diego

Introduction

Science is a global enterprise. Its conduct transcends geographic, disciplinary, and educational boundaries. The routine ability to work with experts around the world, to use resources distributed across international boundaries, and to share and integrate different types of data, knowledge, and technology is becoming more realistic. The development and deployment of compatible cyberinfrastructures (a.k.a. the Grid), linking together computers, data stores, and observational equipment via networks and middleware, forms the operative IT backbone of international science teams. While large community projects exist that exploit the Grid (e.g., the Large Hadron Collider),1 international collaboration can and most likely will also take place at the scale of smaller teams. For example, a multidisciplinary, distributed team of researchers from the University of Zurich, the University of California San Diego, and Monash University in Australia is synthesizing application and grid middleware, using distributed computational resources from the PRAGMA testbed,2 to gain understanding of complex biochemical reactions that can impact the design of new drugs.3 4 5 5a 6 This example and others7 8 9 demonstrate the value and potential of working with the emerging cyberinfrastructure. Yet significant effort was required to bring these tools, people, and resources together. The current challenge for the Grid community is to make this demonstrated potential a reality on a routine basis.

Pacific Rim Application and Grid Middleware Assembly (PRAGMA)

Established in 2002, the Pacific Rim Application and Grid Middleware Assembly10 is an open organization whose focus is how to practically create, support, and sustain international science and technology collaborations. Specific experiments are postulated; candidate technologies and people are identified to support these experiments; evaluation is performed in our trans-Pacific routine-use laboratory; and successful solutions are integrated into country-specific software stacks or Global Grid Forum11 standards. The group harnesses the ingenuity of more than 100 individuals from 25 institutions to create and sustain these long-term activities. PRAGMA plays a critical role as an international conduit for personal interactions, ideas, information, and grid technology. Our multi-faceted framework for collaboration catalyzes and enables new activities because of a culture of openness to new ideas. Our pragmatic approach has led to new scientific insights,3 enhanced technology,12 13 14 and a fundamental sharing of experiences manifest in our routine-use laboratory.

PRAGMA began with the following observations: global science communities were emerging in increasing numbers; grid software had entered its second phase of implementation; and international networks were expanding rapidly in capacity as fundamental high-speed enablers for data and video communication. But the integration and productive use of these potential tools were “out of reach” for many scientists. To make this technology routinely accessible, a founding set of Pacific Rim institutions began to work together to share ideas, challenges, software, and possible end-to-end solutions.

Our common-sense approach begins with prospective collaborative science-driven projects (such as whole-genome annotation, quantum chemistry dynamics, Australian savannah wildfire simulation, and remote control of large electron microscopes coupled with 3D tomographic reconstruction) so that both people and candidate technologies can be identified to address the scientific needs. Identification proceeds through people-to-people networking, progressively more sophisticated demonstrations, tutorials on software components (e.g., gFarm, MOGAS,15 Nimrod, Rocks,16 Ninf-G,17 and others), and a consistent face-to-face workshop schedule. When enough ingredients are available to start down the pathway of using the Grid, integrating software to be grid-aware, and/or sharing data, the software is instantiated on our routine-use laboratory. This lab (described in more detail below, with its evolution and management challenges described in18) is where technologists from multiple organizations work together to provide a baseline infrastructure for evaluation. Successful science projects can move to larger resource pools if needed. The entire end-to-end process is possible because of an active international steering committee that continually focuses the group’s multiple efforts on tangible results. Below we describe and illustrate these key components of PRAGMA, together with software distribution and community building.


Reference this article
Arzberger, P., Papadopoulos, P. "PRAGMA: Example of Grass-Roots Grid Promoting Collaborative e-Science Teams," CTWatch Quarterly, Volume 2, Number 1, February 2006. http://www.ctwatch.org/quarterly/articles/2006/02/pragma-example-of-grass-roots-grid-promoting-collaborative-e-science-teams/
