CTWatch
May 2007
Socializing Cyberinfrastructure: Networking the Humanities, Arts, and Social Sciences
Introduction
David Theo Goldberg, Director - University of California Humanities Research Institute
Kevin D. Franklin, Executive Director - University of California Humanities Research Institute

The sciences have been transformed dramatically by the application of cyberinfrastructure to addressing pressing questions. The humanities, arts and social sciences, by contrast, took to institutional cyberinfrastructural development and application relatively late, largely with the advent of the Web. Early experimentation in these latter domains largely involved taking existing tools and processes, such as search engines for information retrieval, text analytics for information extraction, and video game engines for interactive applications with real-time graphics, and applying them to new domains.

These turned out to be crucially important experiments on at least two scores. They opened up possibilities for thinking about the technologies and their development in a new light, leading in some instances to genuinely new technological insights and developments. They also led the way in opening a new generation of researchers in the humanities, arts and social sciences to thinking about novel tool development and new applications. And they raised important questions about how these new technologies challenge traditional ways human beings think of themselves as human. We have traditionally thought about being human in relation to the natural world. We are now challenged to think anew about what it is to be human, and about its limits, in a world dramatically fashioned and transformed by technology.

Cyberinfrastructure has enabled us to know a vaster range about a domain, to know it in greater depth, and to do so far more quickly than manual methods allow. It has also considerably increased our access to information about a wider range of subject matters. Increasingly, it has made possible the connecting of available information, relating data sets or more discrete pieces of information, and now, increasingly, meanings, in potentially novel and provocative ways. So it is not just that cyberinfrastructure allows humanists, artists and social scientists to compose new and large data sets, and to circulate them in more open ways. If that were all, we might still sing its praises, but this would hardly be revolutionary. What has been so important about the application of cyberinfrastructure, its properly transformative potential, has been its more relational and networking capacities, what Cathy Davidson in the opening article of this special issue of CTWatch Quarterly calls Web 2.0 HASS applications. Both Davidson and, in the second article, Beemer et al., a network of scholars at the University of California, recount aspects of this early history in the emergence of HASS cyberinfrastructure.

There have been three principal areas of HASS focus over the past dozen years or so. The first, both temporally and in receiving the greatest emphasis and resources, has been the development of discrete digital projects. These interests have overwhelmingly taken the form of digital archiving and text-mining tools and applications or, in the case of the arts, robotic experiments concerning artificial intelligence and the development of games. There has been a wealth of recent projects: three-dimensional visualization tools and geographic information system maps, live telepresence dance performances, persistent digital archives of historical and cultural materials, experimental digital reading and writing, especially poetry, and so on. The latter part of Beemer et al. and the articles by Patricia Seed, Ruzena Bajcsy et al., and George Lewis elaborate a range of compelling open source projects in the humanities, arts and social sciences under current development. But these are just a sampling of the vast range.

The second, more recent, contribution has been the design, development, and implementation of cyberinfrastructure, such as computing grids running on high-speed fiber-optic networks like LambdaRail and Internet2, for large-scale interactivity across, and access to, searchable and expanding archives and data sets. The latter increasingly include visual and auditory materials and the applications developed around them. In their article, Larry Smarr et al. describe a large-scale project, CineGrid, for hosting and sharing what is projected to become a comprehensive database of the history of cinema.

The third area concerns critical discussion of the implications of the digital turn for thinking about the human: how we may map and know the human condition at specific times, as well as its transformations over historical time. These ontological and epistemological issues have a long philosophical history, and these critical discussions indicate how the cyberinfrastructural revolution, while genuinely novel, even revolutionary, nevertheless raises questions about being and knowledge that have long faced the human condition. An increasing range of humanities centers, both digital humanities centers and more traditional ones, have been organizing forums, workshops, and working groups to address these issues.

Finally, a general question of considerable importance today concerns how to cultivate critical techno-humanists able to converse in both the technical languages of engineering and computer science, on one hand, and those of the human sciences (including the arts), on the other. An important first step, which a number of engaged researchers are already taking, is to encourage ongoing conversations and interactions arranged around hands-on projects in what we call the human sciences. Vernon Burton et al., in the closing article of this issue, address some of the benefits, resource requirements, and challenges in establishing a university center to promote such collaborations between human scientists and engineers, computer scientists, and technologists. These interactions would identify the technological needs, interests, and possibilities at the interface of the human sciences and the digital. Techno-humanists will be better positioned to address collaboratively the engineering and social-humanistic challenges raised by digitization and preservation (stewardship), by textual, visual, and audio search engines, by data mining and analysis, and by shared databases and their management. This special issue of CTWatch, while introducing some of the interesting work going on at the interface of the human and computational sciences, the arts, and engineering, also takes another, less tentative step toward encouraging the development of techno-humanists fluent in both humanistic and technological vocabularies.

Cathy N. Davidson, Duke University


The first generation of the digital humanities was all about data. The excitement and impetus of digital humanities throughout much of the 1990s, and continuing to the present, was that massive databases could be digitized, searched, and combined with other databases for interoperable searches that yielded more complex and complete results in a shorter amount of time than the human mind had ever imagined possible.1 In this way, revolutions in digital humanities were similar to those in other fields. In biological science, sequencing the genome could never have happened without dramatic increases in computational power. In natural science, we know more than ever about global warming thanks to such projects as the Millennium Ecosystem Assessment (2005), which evaluates global changes to 24 separate life-support systems (including biodiversity, ecosystems, and the atmosphere).2 In the social sciences, human complex systems theory combines results from social network theory, demography, migratory patterns, and social regulation and law to analyze movements of persons and goods globally. And in the human sciences or humanities, myriad projects digitize the texts and artifacts of world culture, from its beginnings to the present, in order to create new understandings of the history of ideas.

Second-generation digital humanities are the scholarly equivalent of what Tim O’Reilly has dubbed “Web 2.0.” If Web 1.0 was the World Wide Web’s collection of websites and databases (what human scientists would call “archives”), Web 2.0 is a fully developed platform that serves a variety of applications to its end users.3 However, there is also an important difference between the business and humanistic histories of cyberinfrastructure. O’Reilly coined “Web 2.0” to differentiate what was coming next from what didn’t work in the burst dot-com economy. By creating a “before” and “after,” the concept of Web 2.0 was designed to encourage a new generation of investors in internet technologies. But there is no equivalent “bad before” in digital humanities. Rather, the current generation of digital humanities extends and builds upon the foundation of Humanities 1.0.

The transformation of archives into interoperable and professionally constructed digital databases has changed the research and pedagogical questions of our age, by providing the individual researcher almost instantaneous access to far more data than any one person could gather in a lifetime and by allowing more people access to these materials than ever before. Let me give an example of how transformative this has been for teaching and education in the human sciences. Back in the 1980s and 1990s, when I taught courses on mass education, reading, and writing during the highly contentious political period following the American Revolution, I used to have graduate students do archival research in early American newspapers and magazines, some of which were available on microfilm or microfiche, unindexed.4 A student might have spent a hundred hours rolling the films in the dizzying light of those unwieldy machines (I had one in my office and used to call it, without affection, The Green Monster). If the student found one good example, it was a successful project. Two examples constituted a triumph. In many cases, the search was so frustrating that the student might well have applied for a scholarship to travel to an archive in New England, such as the American Antiquarian Society, where the resources were far richer.

If I teach that course now, my students can go to searchable databases of early American imprints, of eighteenth-century European imprints, of South American and (growing) African archives, and of archives in Asia as well. A contemporary student could, in far less time, not only use digitized and indexed archives to search U.S. databases but could make comparisons across and among popular political movements world-wide, and possibly make arguments about the spread of dissent along with commodities such as tea, sugar, or rice. The barbarism and ubiquity of the slave trade as part of the spread of global systems of capital also meant an exchange of ideas about personhood, statehood, individual rights, and human rights.


Suzy Beemer, University of California Humanities Research Institute
Richard Marciano, San Diego Supercomputer Center, UC San Diego
Todd Presner, UCLA


The digital revolution in the humanities, arts, and social sciences (HASS) is most definitely here. It has been slow and difficult in coming, for multiple, complicated reasons. Cyberinfrastructure (CI) is just starting to be explored in academic fields outside of high-performance computing. HASS scholars are finding ways to overcome impediments to their participation in CI, and are producing new knowledge by using advanced technology to pursue their research.1 This article will first lay out some of the obstacles these disciplines face in using technology, and will then profile two projects in the University of California system that exemplify what computational analysis can bring to HASS fields.

Obstacles

Compared to the scientific disciplines, HASS fields have historically been “low tech” in their methodologies. So it can seem surprising that their technological needs for research, now that they are turning in this direction, pose significant difficulties for programmers. Data sets, if indeed the data exist as “sets” at all, are disparate and fragmented, and may actually be larger than those in the scientific fields. Structuring the data for access and display is extremely complex. Technological difficulties abound: comparative analysis with respect to geographical location and temporal, societal, linguistic, and cultural aspects requires capabilities for creating hierarchically structured data with various levels of abstraction. The challenge lies in determining how best to aggregate, organize, and display the data and in developing the most beneficial tools for analysis. The final report of the ACLS Commission on Cyberinfrastructure and the Humanities delineates many of the challenges that HASS needs pose for efficacious CI (a small illustrative sketch of such hierarchical data follows the excerpt):

Digitizing the products of human culture and society poses intrinsic problems of complexity and scale. [This cultural record is] multilingual, historically specific, geographically dispersed, and often highly ambiguous in meaning. . . . [A] critical mass of information is often necessary for understanding both the content and the specifics of an artifact or event, and this may include large collections of multimedia content. . . . [HASS] scholars are often concerned with how meaning is created, communicated, manipulated, and perceived [which further complicates programming for access and display]. Recent trends in scholarship have broadened the sense of what falls within a given academic discipline: for example, scholars who in the past might have worked only with texts now turn to architecture and urban planning, art, music, video games, film and television, fashion illustrations, billboards, dance videos, graffiti, and blogs.2
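To make that data-modeling challenge concrete, consider how even a single artifact’s record must layer geography, time, language, and media at several levels of abstraction. The following is a minimal, hypothetical sketch in Python; every class and field name here is our own illustration, not a schema from the ACLS report or from any existing project.

```python
# A hypothetical sketch (ours, not a published schema) of a hierarchically
# structured record for a cultural artifact, with geographic, temporal, and
# media facets at several levels of abstraction.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Place:
    name: str                       # most specific level, e.g. a city
    region: Optional[str] = None    # broader levels of abstraction
    country: Optional[str] = None

@dataclass
class TimeSpan:
    start_year: int
    end_year: int
    certainty: str = "approximate"  # cultural records are often ambiguous

@dataclass
class Artifact:
    title: str
    languages: List[str]                                    # often multilingual
    created: TimeSpan
    provenance: List[Place] = field(default_factory=list)   # geographically dispersed
    media: List[str] = field(default_factory=list)          # text, image, video, ...

# A single record can then be aggregated, reorganized, and displayed at
# whichever level of abstraction a scholar's question requires:
pamphlet = Artifact(
    title="Broadside against the Stamp Act",
    languages=["en"],
    created=TimeSpan(1765, 1766),
    provenance=[Place("Boston", region="New England", country="British America")],
    media=["text", "image"],
)
```

Even this toy record suggests why “access and display” is hard: the right level of abstraction (city, region, or country; year or era) depends on the scholar’s question, not on the data.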

An equally important if not greater barrier to HASS participation in CI is the requirement for enormous cultural change. The first thing that may come to mind is faculty resistance. Most faculty members welcome the increased access to objects of study that digital technologies have made possible. But some, especially in the humanities, have not believed that advanced technology can make a real contribution to their fields, or they may fear an overemphasis on technology, i.e., that it is largely bells and whistles, ultimately distracting scholars and students from careful interpretive study requiring close attention to text, context, artifact, human communication/interaction/performance, and other objects of study. Resistance, however, is not the most significant impediment to the use of advanced technologies in these disciplines: plenty of HASS researchers are tech-savvy and excited about new tools for their research and teaching. The far more pressing obstacles concern data, collaboration, and funding.


Patricia Seed, University of California-Irvine


Flat maps conclusively proved their worth to the 3D world more than 500 years ago. In 1569 a mapmaker named Gerhard Mercator devised a projection onto a plane surface that allowed a navigator to plot an ocean-going voyage as a straight line. At a time when sailors spent months onboard ships, the ability to plan (and record) a journey on a sheet of paper or vellum proved invaluable. Nor has the 2D map been made outmoded by newer means of travel. In the twentieth century, airplane pilots discovered that another, slightly differently designed flat map permitted them to find the shortest flying distance between two points.
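Mercator’s construction can be stated compactly: longitude maps linearly onto the horizontal axis, while latitude is stretched as ln tan(π/4 + φ/2), which is precisely what makes a course of constant compass bearing (a rhumb line) plot as a straight line. A minimal sketch, using the standard spherical form of the projection (the code and the rounded place coordinates are our own illustration):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an assumption for illustration

def mercator(lat_deg: float, lon_deg: float):
    """Project a latitude/longitude pair onto Mercator's plane.

    Longitude maps linearly to x; latitude is stretched so that a course
    of constant compass bearing (a rhumb line) plots as a straight line.
    """
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = EARTH_RADIUS_KM * lam
    y = EARTH_RADIUS_KM * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# Lisbon and Recife, roughly the kind of Atlantic crossing such charts served:
print(mercator(38.7, -9.1))   # (x, y) for Lisbon
print(mercator(-8.0, -34.9))  # (x, y) for Recife
```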

The 2D map has thus allowed humans to journey where no one had ventured before. Magellan sailed around the world guided by nearly half a dozen skilled mapmakers, and James Cook journeyed through the South Pacific with the aid of equally flat navigational charts. Eventually, the ability to travel led to regularly scheduled transportation and from that to the growth of global commerce and communication. No shortage of good reasons exists to celebrate the human achievements that flat maps made possible.

But when trying to communicate knowledge of the past, to grasp actions that occurred long ago and far away, two-dimensional surfaces limit the degree of comprehension. For example, terrain has historically determined the best places for concealment and observation, and the preferred avenues of approach, during military skirmishes. Looking at a flat map with small circles designating the locations of attacks leaves us unable to fully grasp the role of the large silent actor in many campaigns: the countryside. Lacking a three-dimensional representation of the terrain, a full-fledged appreciation of the difficulties presented in these conflicts remains out of reach.

But such limitations are not restricted to portrayals of combat. When seeking to understand how people migrated throughout the world, flat maps leave out the reasons for the paths they took. When inhabitants of Tibet migrated from plateaus 14,000 feet high, they swerved to avoid even higher mountains and, upon reaching the lowlands of tropical Burma, sidestepped river gorges 10,000 feet deep.1 Flat maps fail to do justice to either the difficulties of descending the Himalayas or the hazards of the paths migrants took in northern Burma. With only a flat map, it remains hard to understand the level of courage these migrants displayed and the degree of difficulty they endured.

Despite their considerable limitations, two-dimensional maps predominate today in the study of the past, depicting the historical movement of peoples, the paths of long-forgotten sailing vessels, and the positions of opposing forces in battles fought centuries ago, all on flat surfaces. Yet the capacity to change this approach already exists.

Existing technologies could create many of these three-dimensional images for historical study. Digital elevation models (showing the height of landscape features) are freely available for nearly every corner of the earth. Including this information, even on a flat map, would lead to a greater appreciation of the physical obstacles to survival that humans have faced in times of both peace and war: the unexpected mountains and gorges that thwarted their search for better places to live or to hide.
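Working with such elevation data is now within reach of any scholar with a laptop. The sketch below is a minimal illustration: a real study would load a freely available DEM tile (for example, from SRTM) into the elevation array, whereas here a synthetic peak merely stands in for one.

```python
# A minimal sketch of rendering terrain from a digital elevation model.
# A real study would read a freely available DEM tile (e.g., SRTM) into
# `elev`; the synthetic single peak below merely stands in for one.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 50, 200), np.linspace(0, 50, 200))
elev = 3000.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 200.0)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")   # requires matplotlib >= 3.2
ax.plot_surface(x, y, elev, cmap="terrain", linewidth=0)
ax.set_xlabel("km east")
ax.set_ylabel("km north")
ax.set_zlabel("elevation (m)")
plt.show()
```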


George E. Lewis, Columbia University


In recent years, the computer has assumed a central role in artistic practice. Digital technology now serves as a critical site for interdisciplinary exploration, encouraging the blurring of boundaries between art forms. Increasingly, new imaginings of history, culture, and human practice are finding the computer at their center. In turn, these new imaginings are being driven by the advent of new and more powerful forms of computer interactivity that challenge traditional conceptions of human identity and physicality.

At the uninterrogated core of common notions of interactivity, as it is practiced in the digital domain, we find the primordial human practice of improvisation. Perhaps the most important unacknowledged lesson the interactive digital arts have taught us concerns the centrality of both improvisation and interactivity to the practice of everyday life. What is more difficult to divine for these arts, as well as the theorizations that attend them, is the nature of the relationship between the two concepts.

To begin, we must interrogate the theoretical and historical discourses that mediate our encounters with computers, including those that condition our cultural understanding of both improvisation and interactivity. Increasingly, theorizing the nature of these two practices is becoming an interdisciplinary affair, centering on how meaning is exchanged in real-time interaction. Such studies, combining the insights of artists, cultural theorists and technologists, will be crucial to the development of new conceptions of digitally driven interactivity.

Canonical new media histories tend to date the advent of interactivity in artmaking to the mid-1980s.1 However, anyone who remembers the period when “multimedia” did not refer to computers may find ironic the historical lacuna separating the notion of interactivity now on offer from the practices that arose in the computer music communities beginning in the early 1970s. This early period produced a number of “interactive” or “computer-driven” works, representing a great diversity of approaches to the question of what interaction was and how it affected viewers, listeners, and audiences.

Flashback

By the early 1960s, magnetic tape-based music composition was known to offer possibilities for precise control of time and sound, but was also criticized as insensitive to real-time nuances of human expressivity. To many, making electronic music live, in real time in front of audiences, would revitalize the paradigm of the composer-performer, long abandoned in the West. However, improvisation, a primary practice of the European composer-performer since antiquity, had been unceremoniously dumped from Western music’s arsenal of practice by the late 19th century.

The recrudescence of real-time music making in the American classical music of the 1950s not only explored chance as a component of composition, but also rekindled aesthetic contention around the nature, purpose, structure, and moral propriety of improvised forms and practices. According to cultural historian Daniel Belgrad, these debates were part of an emerging “culture of spontaneity” that crucially informed the most radical American artistic experimentation in the mid-20th century, from the Beats and Abstract Expressionism to the transgressive new music of Charlie Parker, Thelonious Monk, and the musical New York School of John Cage, David Tudor, Morton Feldman, Earle Brown, and Christian Wolff.2

By May of 1968, “freedom” was on both the political and the musical agenda in Europe and the United States. Improvisation was widely viewed as symbolic of a dynamic new approach to social order that would employ spontaneity to unlock the potential of individuals, and to combat oppression by hegemonic political and cultural systems. The rise of “free jazz” in the United States was widely connected, both in Europe and the United States, with challenges to racism and the social and economic order, generally.


Larry Smarr, Calit2; University of California, San Diego
Laurin Herr, Pacific Interface, Inc.
Tom DeFanti, Calit2; University of Illinois at Chicago
Naohisa Ohta, Keio University
Peter Otto, University of California, San Diego


The shared Internet is used more and more for the transmission of digital video data sets; YouTube alone distributes 100,000,000 videos every day.1 These streams, however, are engineered to be easily transportable over the shared Internet to home users at megabit/sec rates. In contrast, cinema today is still largely shot and distributed in the century-old silver halide medium of film.2 A major barrier holding back the transition of theatrical film to digital distribution is that preserving the extreme photographic resolution of motion pictures as seen in theatres requires playback bandwidth of a quarter gigabit/sec or more (depending on compression). Cinema is the next step in the ongoing digital conversion of modern media, and it will require building new image-centric hardware, software, and networks that match full human visual and auditory acuity, supporting production, distribution, and, something never possible with film, real-time global collaboration.

In addition to its essential role in the creative arts, the film industry has a large economic footprint. For instance, in 2005, the Motion Picture Association of America estimated that movie production provided employment for over 245,000 Californians, with an associated payroll of more than $17 billion.3

Faced with the historic transformation of the global theatrical film business from analog to digital technology, the major Hollywood studios formed the Digital Cinema Initiatives (DCI) consortium4 to define the technical standards for digital cinema. The two resolutions that have emerged are termed 2K (2048x1080), roughly comparable to the high end of HD, and 4K (4096x2160), with four times the resolution of 2K or high-end HD (Fig. 1) and 24 times that of a standard broadcast TV signal. Both standards use 12 bits per color. The digital cinema frame rate for DCI-compliant 2K/4K monoscopic “flat” movies is the same 24 fps used in analog cinema, plus a new DCI rate of 48 fps (2 x 24 fps) for 2K stereoscopic 3D movies.

The uncompressed bandwidth of the 4K format streaming in real time is ~7.6 gigabits/sec, with each 4K frame holding 8.8 megapixels, over twice what is available on the highest-end PC monitors and eight times what the typical laptop offers today. When compressed using the DCI-recommended JPEG 2000 distribution specification, 4K bit rates to the neighborhood theatre are capped at 250 megabits/sec.
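The arithmetic behind these figures is easy to verify. The short calculation below (ours, not part of the DCI specification) reproduces the numbers above from the 4K parameters:

```python
# Reproducing the 4K bandwidth figures from the DCI parameters cited above.
width, height = 4096, 2160       # 4K frame
colors, bits_per_color = 3, 12
fps = 24

pixels = width * height                           # 8,847,360 ~ 8.8 megapixels
bits_per_frame = pixels * colors * bits_per_color
uncompressed_bps = bits_per_frame * fps           # ~7.6 gigabits/sec

print(f"{pixels / 1e6:.1f} megapixels per frame")
print(f"{uncompressed_bps / 1e9:.2f} Gbit/s uncompressed")
print(f"~{uncompressed_bps / 250e6:.0f}:1 compression to meet the 250 Mbit/s cap")
```

Running it gives 8.8 megapixels per frame, about 7.64 Gbit/s uncompressed, and roughly a 31:1 compression ratio to fit under the theatrical distribution cap.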

As such, we realized several years ago that 4K digital motion pictures would be one of the most demanding data types for emerging cyberinfrastructure. Since the NSF-funded OptIPuter project5 was well underway, the principal investigators of the OptIPuter at Calit2 and UIC’s Electronic Visualization Laboratory (EVL) initiated a project termed “CineGrid” to apply OptIPuter architectures to the needs of digital media professionals. Our initial collaborative partners were the Research Institute for Digital Media and Content, Keio University (Keio/DMC), the University of Southern California School of Cinematic Arts (USC/SCA), and Pacific Interface, Inc. The complex CineGrid project is overseen by Pacific Interface, which first proposed the CineGrid concept to Calit2/EVL and has subsequently incorporated CineGrid as a nonprofit membership organization to coordinate a rapidly growing international research agenda ramping up in the US, Japan, Canada, and Europe.


Ruzena Bajcsy, University of California, Berkeley
Klara Nahrstedt, University of Illinois, Urbana-Champaign
Lisa Wymore, University of California, Berkeley
Katherine Mezur, Mills College

1. Introduction

During the last 10 years, several disparate technologies and sciences, notably computer vision, computer graphics, distributed computing, and broadband networking, have come together to facilitate geographically distributed tele-immersive environments in which people can interact and communicate.

We have built such 3D tele-immersive (TI) environments at the University of Illinois, Urbana-Champaign (UIUC) and the University of California, Berkeley (UCB).1 The TI environments deploy large-scale 3D camera networks that capture, digitize, and reconstruct three-dimensional images and sounds of moving people, and then integrate and render the multimedia data from geographically distributed sites into a joint virtual space at each TI site. The reasons for deploying 3D TI environments, rather than readily available video conferencing technology, are as follows: (1) video conferencing displays only individual 2D video streams/pictures (though next to each other) on the screen, hence no joint virtual space can be created and no real interaction is feasible; (2) the 3D environment facilitates different views of the interaction from any viewpoint the viewer desires, with different digital options such as different scales of participants and different people/scene orientations that create physically impossible views; and (3) with the 3D TI environment, one can easily integrate synthetic objects and other environments into the current immersive environment.
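To make the integration step concrete, the sketch below (illustrative only, not the actual UIUC/UCB implementation) shows the geometric core of rendering into a joint virtual space: each site’s reconstructed points receive a per-site rigid transform plus a scale factor, which is what makes resized participants and physically impossible viewpoints inexpensive to produce.

```python
# Illustrative sketch (not the UIUC/UCB code): placing reconstructed point
# clouds from remote TI sites into one joint virtual space. Each site gets
# a rigid transform plus a scale.
import numpy as np

def place_site(points, rotation, translation, scale=1.0):
    """points: (N, 3) array of 3D points from one site's camera network."""
    return scale * points @ rotation.T + translation

site_a = np.random.rand(1000, 3)   # stand-in for a reconstructed dancer at UCB
site_b = np.random.rand(1000, 3)   # stand-in for a reconstructed dancer at UIUC
identity = np.eye(3)

# Render both into the shared space, side by side, one at half scale:
joint_space = np.vstack([
    place_site(site_a, identity, np.array([-1.0, 0.0, 0.0])),
    place_site(site_b, identity, np.array([1.0, 0.0, 0.0]), scale=0.5),
])
```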

While there have been other demonstration projects of this nature (e.g., 2, 3, 4, 5, 6), they were restricted to video sequences and special dedicated network connectivity. We aim to create 3D TI environments from commercial off-the-shelf (COTS) components for a common user who does not have the luxury of expensive supercomputing facilities, special-purpose camera hardware, and dedicated networks.

Yet we see the opportunity to use this kind of technology for exploring geographically distributed interaction and communication among people in settings where physical/body interaction is important. We have explored this interaction in the domain of dance, since dance is a form of physical communication. However, we also see many other applications, such as remote physiotherapy, collaboration among distributed scientists working on common multidimensional data sets, design of artifacts (architectural, mechanical, chemical, and electrical designs), planning of coordinated activities, and the like.

In this article, we describe the individual components comprising the UIUC and UCB TI environments, as well as the challenges, and a few of the solutions, that allowed us to stage one of the very first public performances of geographically distributed collaborative dancing, in December 2006. We also briefly describe the experiment and the lessons learned from this exciting and very successful performance.


Vernon Burton, Illinois Center for Computing in Humanities, Arts, and Social Science; NCSA; University of Illinois at Urbana-Champaign
Simon J. Appleford, Illinois Center for Computing in Humanities, Arts, and Social Science; University of Illinois at Urbana-Champaign
James Onderdonk, Illinois Center for Computing in Humanities, Arts, and Social Science; University of Illinois at Urbana-Champaign


The paradigm by which humanities and social science scholars conduct their research is rapidly changing. In a remarkably short period of time, digital technology has become essential to our intellectual processes, every bit as important as writing, and if humanists and social scientists do not embrace and study its potential, they will not have access to a complete scholarly and pedagogical arsenal. This marvelously protean technology, which holds the potential to revolutionize teaching, outreach, and research across the humanities, arts, and social science disciplines, must be made available in its fullest for discovering, synthesizing, and sharing knowledge. In recognition of both this potential and the challenges inherent in addressing these growing technological needs, there is a vital need for the development of a national cyberinfrastructure for the humanities and social sciences. Indeed, this was the key recommendation of the American Council of Learned Societies in its 2006 report Our Cultural Commonwealth, which urged universities, funding agencies, and the federal government to invest in such a cyberinfrastructure “as a matter of strategic priority.”1

The implications of this recommendation are startling, especially as the humanities, arts, and, to a somewhat lesser extent, the social sciences are notoriously under-funded, with many departments within these disciplines coming under increasing financial pressure. The resources required to implement even a basic level of support for digital humanities and social science scholarship (extensive hardware and software purchases, as well as the acquisition of significant technical expertise) are clearly beyond the means of most campus units; indeed, even if this approach were budgetarily feasible, it would only result in the unnecessary duplication of resources. An effective cyberinfrastructure can, in truth, only be created through the establishment of a series of national centers at the university level dedicated to defining, implementing, and leading digital humanities, arts, and social science research across discipline- and unit-based boundaries, while simultaneously participating in a healthy dialogue that both contributes to and exploits cyberinfrastructure. These centers would furthermore act as hubs that provide stability within the community and allow long-term relationships to be forged with scholars at institutions that lack the resources to establish digital humanities centers of their own. This article seeks to outline some of the benefits and challenges inherent in attempting to implement this agenda, as experienced by one recently founded center for digital humanities, arts, and social science research.

The Illinois Center for Computing in Humanities, Arts, and Social Science (I-CHASS) at the University of Illinois at Urbana-Champaign was founded in 2005 to serve the national research and education community, making resources and tools for high-end computing, data collection and analysis, geospatial inquiry, visualization, communication, and collaboration available to scholars. I-CHASS is envisioned as a nexus of scholarship, creativity, collaboration, outreach, and technical expertise—one hub among others in the growth of a vibrant community that spans both national and international collaborations and encompasses the humanities, arts, social sciences, and technology.


