Designing and Supporting Data Management and Preservation Infrastructure
Fran Berman and Reagan Moore, San Diego Supercomputer Center
CTWatch Quarterly
May 2006

1. Introduction

The 20th century brought about an “information revolution” that has forever altered the way we work, communicate, and live. In the 21st century, data is ubiquitous. Available in digital format via the web, desktop, personal device, and other venues, data collections both directly and indirectly enable a tremendous number of advances in modern science and engineering.

Today’s data collections span the spectrum in discipline, usage characteristics, size, and purpose. The life science community utilizes the continually expanding Protein Data Bank [1] as a worldwide resource for studying the structures of biological macromolecules and their relationships to sequence, function, and disease. The Panel Study of Income Dynamics (PSID) [2], a longitudinal study initiated in 1968, provides social scientists detailed information about more than 65,000 individuals spanning as many as 36 years of their lives. The National Virtual Observatory [3] provides an unprecedented resource for aggregating and integrating data from a wide variety of astronomical catalogs, observation logs, image archives, and other resources for astronomers and the general public. Such collections have broad impact, are used by tens of thousands of individuals on a regular basis, and constitute critical and valuable community resources.

However, the collection, management, distribution, and preservation of such digital resources does not come without cost. Curation of digital data requires real support in the form of hardware infrastructure, software infrastructure, expertise, human infrastructure, and funding. In this article, we look beyond digital data to its supporting infrastructure, and provide a holistic view of the software, hardware, human infrastructure, and costs required to support modern data-oriented applications in research, education, and practice.

2. Digital Data Curation

Digital data curation focuses on the generation of descriptive metadata and validation of the quality of the data. Digital data preservation focuses on the characterization of data authenticity (provenance information) and the management of data integrity across multiple generations of storage technology. An example of a curated community digital collection is the Protein Data Bank (PDB). The PDB is a global resource for structural information about proteins that is maintained by the Worldwide Protein Data Bank (wwPDB). This organization is composed of the Research Collaboratory for Structural Bioinformatics (RCSB), a consortium consisting of groups at UCSD/SDSC and Rutgers; the Macromolecular Structure Database (MSD) at the European Bioinformatics Institute (EBI) in Hinxton, UK; and PDBj in Osaka, Japan.
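
The distinction can be made concrete with a small sketch. The class below, written in Java with invented field names, separates the descriptive metadata that curation produces from the provenance and integrity information that preservation maintains; it is an illustration, not an actual collection schema.

    // Minimal sketch distinguishing the two kinds of record described above.
    // All field names are hypothetical, invented for illustration.
    public class CollectionItem {
        // Curation: descriptive metadata supporting discovery and validation.
        String title;
        String description;
        String qualityAssessment;

        // Preservation: authenticity and integrity information.
        String provenance;        // who created or modified the item, and how
        String checksum;          // fixity value for detecting corruption
        String storageGeneration; // media generation currently holding the bytes
    }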

When a user queries a PDB data portal for information on HIV-1 protease, a target in the fight against AIDS, considerable infrastructure operates behind the scenes to answer the request:

The RCSB PDB database infrastructure at UCSD is coded in Java and built entirely from public domain software. A MySQL database at SDSC instantiates the collection’s schema, with middle and presentation layers built around Hibernate. Software development focuses on systems that organize the material and on services that support discovery, browsing, and presentation. The RCSB PDB infrastructure was designed to let the collection expand continuously and to support the ingestion and evaluation of new entries.
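
As a concrete illustration of this kind of stack, the sketch below shows a Hibernate-style object-relational mapping in Java. The entity, table, and field names are hypothetical, invented for illustration; they are not the actual RCSB PDB schema.

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import javax.persistence.Temporal;
    import javax.persistence.TemporalType;

    // Hypothetical entity illustrating Hibernate-style object-relational
    // mapping over a MySQL table. Table and field names are invented and
    // do not reflect the actual RCSB PDB schema.
    @Entity
    @Table(name = "structure_entry")
    public class StructureEntry {

        @Id
        @Column(name = "pdb_id", length = 4)
        private String pdbId;              // four-character structure identifier

        @Column(name = "title")
        private String title;              // descriptive title of the entry

        @Temporal(TemporalType.DATE)
        @Column(name = "deposition_date")
        private java.util.Date depositionDate;

        public String getPdbId() { return pdbId; }

        public String getTitle() { return title; }
    }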

In short, to accommodate the request for information about HIV-1 protease from the PDB, substantial software, hardware, human, and funding support is required. At a recent AAAS panel on data [4], RCSB PDB Director Helen Berman estimated that in 2005 more than one billion dollars of research funding was spent to generate the data that were collected, curated, and distributed by the PDB.

3. Preserving Data over Time

Some digital collections will continue to be valuable resources for the foreseeable future. These typically include irreplaceable collections (e.g., the Shoah Collection of Holocaust survivor testimony [5]), valuable community reference collections (e.g., PDB, NVO, PSID), and historically valuable collections such as federal digital records [6][7]. For these digital collections, lifetime is measured in decades, preservation is continuous and active, and new material is often added over time. Over a collection’s decades of existence, the media on which it is stored will go through tens of generations, standard encoding formats will evolve, preservation staff and institutions may change, and so on. In short, everything involved with the collection may evolve, and that evolution must be planned and executed in a way that maintains the integrity of the data collection and minimizes disruption of access for its user community.

Because the time periods over which long-term digital collections are preserved are measured in decades, preservation environments are critical. At SDSC, some of the current data collections have been migrated over the last 20 years onto six generations of storage technology. Over that period, tape media cost per byte has dropped exponentially, halving approximately every three years. If this exponential trend continues, the total lifetime cost of media is only twice the original media cost, being

(1 + 1/2 + 1/4 + …) * (original cost).
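
A few lines of code make the convergence explicit. This sketch (in Java, assuming the cost per generation simply halves each time) prints the partial sums of the series, which approach twice the original media cost.

    // Cumulative media cost across storage generations, assuming cost
    // halves with each generation (approximately every three years).
    public class MediaCostSeries {
        public static void main(String[] args) {
            double originalCost = 1.0; // first-generation media cost, normalized
            double total = 0.0;
            for (int generation = 0; generation < 10; generation++) {
                total += originalCost * Math.pow(0.5, generation);
                System.out.printf("after generation %2d: %.4f%n", generation + 1, total);
            }
            // Partial sums approach 2.0: the lifetime media cost converges
            // to twice the original media cost.
        }
    }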

Of course, tape media represent only a modest portion of the true cost of long-term storage. The labor required to administer the storage system, in particular to manage the transitions between generations of storage technology, must also be incorporated into cost models (see Section 5). Generally, the number of individuals managing the collections can stay constant after the initial period of implementation, even though both the size of the data files and the size of the storage media are growing. This means that storage management labor costs increase more slowly than costs related to collection building and maintenance.

4. Data Management and Preservation for the Science and Engineering Community

Today’s large-scale computational runs often result in large-scale data output. It is not uncommon for a simulation to generate a million files and tens of terabytes of data with over 30 individuals collaborating on the application runs. This level of data output requires dedicated handling to move the data from the originating disk cache into a digital library for future access, with replication on an archival storage system.

SDSC’s digital data collections are representative of the state of the art. Digital collections developed for specific scientific disciplines typically have unique usage models but can share the same evolving data management infrastructure, with the difference between usage and storage models mainly tied to differences in management policies for sustainability and governance. Table 1 lists three categories of digital holdings at SDSC, loosely characterized as data grids (primarily created to support data sharing), digital libraries (created to formally publish the digital holdings), and persistent archives (focused on the management of technology evolution).

Data management requirements can be derived from Table 1. Today, it is not uncommon for a collection to contain 10 to 100 terabytes of data, with two to ten million files. In fact, collections are now assembled that have too many files to house in a single file system: either containers are used to aggregate files into a larger package before storage (a minimal sketch of this approach appears after Table 1), or files are distributed across multiple file systems. The number of individuals who collaborate on developing a shared collection can range from tens to hundreds. In Table 1, the columns labeled ACLs (users with access controls) show how many individuals (including staff) are typically involved in writing files, adding metadata, or changing the digital holdings in the collection. The number of individuals who access a collection can be much larger, as most of the collections are publicly accessible.

                       |     5/17/02      |          6/30/04          |           1/3/06
Project                |      GB   kFiles |       GB   kFiles   ACLs  |        GB   kFiles   ACLs
-----------------------+------------------+---------------------------+---------------------------
Data Grid              |                  |                           |
NSF / NVO              |  17,800    5,139 |   51,380    8,690     80  |    93,252   11,189    100
NSF / NPACI            |   1,972    1,083 |   17,578    4,694    380  |    34,452    7,235    380
Hayden                 |   6,800       41 |    7,201      113    178  |     8,013      161    227
Pzone                  |     438       31 |      812       47     49  |    19,674   10,627     68
NSF / LDAS-SALK        |     239        1 |    4,562       16     66  |   104,494      131     67
NSF / SLAC-JCSG        |     514       77 |    4,317      563     47  |    15,703    1,666     55
NSF / TeraGrid         |       -        - |   80,354      685  2,962  |   195,012    4,071  3,267
NIH / BIRN             |       -        - |    5,416    3,366    148  |    13,597   13,329    351
Digital Library        |                  |                           |
NSF / LTER             |     158        3 |      233        6     35  |       236       34     36
NSF / Portal           |      33        5 |    1,745       48    384  |     2,620       53    460
NIH / AfCS             |      27        4 |      462       49     21  |       733       94     21
NSF / SIO Explorer     |      19        1 |    1,734      601     27  |     2,452    1,068     27
NSF / SCEC             |       -        - |   15,246    1,737     52  |   153,159    3,229     73
Persistent Archive     |                  |                           |
NARA                   |       7        2 |       63       81     58  |     2,703    1,906     58
NSF / NSDL             |       -        - |    2,785   20,054    119  |     5,205   50,586    136
UCSD Libraries         |       -        - |      127      202     29  |       190      208     29
NHPRC / PAT            |       -        - |        -        -      -  |       101      474     28
TOTAL                  |   28 TB    6 mil |   194 TB   40 mil  4,635  |    655 TB  106 mil  5,383

Table 1. Evolution of digital holdings at SDSC. GB = gigabytes of data stored; kFiles = files in 1000's; ACLs = users with access controls. A "-" indicates the collection was not yet under management at that date; ACL counts are not shown for the 5/17/02 snapshot.
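
Table 1's file counts illustrate why containers matter: an archive handles one large object far more gracefully than millions of small ones. The sketch below uses the standard zip format purely to make the idea concrete; production data grids use their own container formats with associated metadata, so this is an illustration, not a real ingestion tool.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    // Minimal sketch: aggregate many small files into a single container
    // before handing it to archival storage, so the archive manages one
    // large object instead of millions of small ones.
    public class ContainerPacker {
        public static void pack(String containerPath, String... memberPaths)
                throws IOException {
            try (ZipOutputStream container =
                     new ZipOutputStream(new FileOutputStream(containerPath))) {
                byte[] buffer = new byte[8192];
                for (String member : memberPaths) {
                    container.putNextEntry(new ZipEntry(member));
                    try (FileInputStream in = new FileInputStream(member)) {
                        int n;
                        while ((n = in.read(buffer)) > 0) {
                            container.write(buffer, 0, n);
                        }
                    }
                    container.closeEntry();
                }
            }
        }
    }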

For many digital holdings, the collection may be replicated among different storage systems and/or sites. Such replication serves multiple purposes, from protecting against disasters that could destroy a single copy to improving access for geographically distributed users.

For many collections, data sources are inherently distributed; the National Virtual Observatory collection is an example. A data management environment must therefore provide the capabilities needed to manage data distributed over a wide-area network. This requirement can be characterized as latency management and is typically met by minimizing the number of messages sent over wide-area networks, for example by aggregating many small requests into bulk operations and by caching or replicating data near its users (a sketch of the bulk approach follows).
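
The sketch below illustrates the bulk-operation idea: rather than paying one wide-area round trip per file registration, registrations are buffered locally and shipped in a single message. The RemoteCatalog interface is hypothetical, a stand-in for whatever remote catalog protocol a real data grid uses.

    import java.util.ArrayList;
    import java.util.List;

    // Latency management by request aggregation: buffer registrations and
    // send them to the remote catalog in one bulk message, so many logical
    // operations cost a single wide-area round trip.
    public class BulkRegistrar {
        public interface RemoteCatalog {                   // hypothetical RPC layer
            void registerBulk(List<String> logicalNames); // one WAN round trip
        }

        private final RemoteCatalog catalog;
        private final List<String> pending = new ArrayList<>();
        private final int batchSize;

        public BulkRegistrar(RemoteCatalog catalog, int batchSize) {
            this.catalog = catalog;
            this.batchSize = batchSize;
        }

        public void register(String logicalName) {
            pending.add(logicalName);
            if (pending.size() >= batchSize) {
                flush();
            }
        }

        public void flush() {
            if (!pending.isEmpty()) {
                catalog.registerBulk(new ArrayList<>(pending));
                pending.clear();
            }
        }
    }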

Many data collections at SDSC are managed on top of federated data grids. Having multiple independent data grids, each with a copy of the data and metadata (both descriptive attributes and state information generated by operations on the data), ensures that no single disaster can destroy the aggregated digital holdings. Federation allows the management of shared name spaces between the independent data grids, enabling the cross registration of files, metadata, user names, and storage resources. The types of federation environments range from peer-to-peer data grids, with only public information shared between data grids, to central archives that hold a copy of records from otherwise independent data grids, to worker data grids that receive their data from a master data grid.
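
A minimal sketch of cross-registration between federated grids follows; every class and method name here is hypothetical, not an actual data grid API. Each grid keeps an independent catalog mapping logical names to replica locations, and federation copies catalog records (not the data itself) between grids.

    import java.util.HashMap;
    import java.util.Map;

    // Each federated grid keeps its own catalog; cross-registration copies
    // a catalog entry from a peer so a file becomes discoverable locally
    // while the bytes remain on the peer's storage.
    public class FederatedGrid {
        private final String gridName;
        private final Map<String, String> catalog = new HashMap<>(); // logical name -> replica location

        public FederatedGrid(String gridName) {
            this.gridName = gridName;
        }

        public void register(String logicalName, String replicaLocation) {
            catalog.put(logicalName, replicaLocation);
        }

        public void crossRegisterFrom(FederatedGrid peer, String logicalName) {
            String location = peer.catalog.get(logicalName);
            if (location != null) {
                catalog.put(logicalName, location);
            }
        }
    }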

5. Data Management System Components and Costs

The data management systems behind digital collections support a variety of functions, including the sharing, publishing, preservation, and analysis of data.

A data management system typically combines several components: storage resources, a database holding descriptive metadata and the state information generated by operations on the data, and middleware that manages a logical name space across the storage systems.

The costs of such data management environments are driven by the need for integrity (e.g., multiple replicas, validation of checksums over time, management of access controls), authenticity (management of provenance information to understand data context), scalability (management of the future number of files and amount of storage), and access (support for interactive versus batch access). Costs can be reduced by minimizing capabilities. For a system that promotes the advancement of science, the ability to support intensive analysis is key; for a system that ensures high reliability, the risk of data loss must be minimized.
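
Checksum validation, one of the integrity mechanisms listed above, is easy to sketch: recompute a replica's digest and compare it with the value recorded at ingestion. The code below uses MD5 via the standard MessageDigest API purely for illustration; a production archive would schedule such checks periodically across all replicas.

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Recompute a replica's checksum and compare it with the value recorded
    // at ingestion, flagging any copy whose bytes have silently changed.
    public class ChecksumValidator {
        public static boolean stillValid(Path replica, String recordedHex)
                throws IOException, NoSuchAlgorithmException {
            MessageDigest digest = MessageDigest.getInstance("MD5");
            try (InputStream in = Files.newInputStream(replica)) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) > 0) {
                    digest.update(buffer, 0, n);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString().equalsIgnoreCase(recordedHex);
        }
    }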

Component costs of the data management system include the costs of installation, maintenance, and evolution of the underlying hardware, software, and human infrastructure.

6. Management

The long-term management of data requires a sustainability and governance model that specifies the policies that will be used to guarantee funding support, minimize risk of data loss, assure integrity, and assure authenticity.

The management plan needs to address how access will be maintained if the sustainability model fails: where the collection might be housed, and how the material will be migrated to the new environment. The concept of infrastructure independence in persistent archives can be extended to include independence from a particular sustainability model through federation with other institutions that use alternate sustainability models. Guaranteed access to a collection requires a community that is willing to curate the collection, identify risks to its maintenance, and seek opportunities to replicate it as widely as possible.

7. Conclusion

For science and engineering, as in life, there is “no free lunch.” The ability to organize, analyze, and utilize today’s deluge of data to drive research, education, and practice incurs real costs for management, curation, preservation, and distribution, and these costs must be included in project budgeting and infrastructure planning.

They are better than the alternative, however. Without responsible data planning as part of project development, organization, and management, valuable data collections will be lost, damaged, or become unavailable. Lack of planning can incur substantial costs for resurrecting, re-generating, or rescuing a data collection, and without critical data, science and engineering advancement and discovery can be slowed. At the end of the day, the costs of thoughtful and strategic data management, curation, and preservation are a bargain.

Acknowledgements
The authors would like to thank Helen Berman, Phil Bourne, and Richard Moore for their comments and improvements.
[1] http://www.pdb.org/pdb/Welcome.do
[2] http://psidonline.isr.umich.edu/
[3] http://www.us-vo.org/
[4] http://php.aaas.org/meetings/MPE_01.php?detail=1110
[5] http://www.usc.edu/schools/college/vhi/
[6] http://www.archives.gov/
[7] http://www.loc.gov/index.html

URL to article: http://www.ctwatch.org/quarterly/articles/2006/05/designing-and-supporting-data-management-and-preservation-infrastructure/