The intent is that the mission timeline planned before flight proceeds uninterrupted while this process is executed on the ground. The nominal damage assessment process is scripted and well rehearsed so as to fit within a nominal 24-hour timeline; this is absolutely essential to mission success. In this way, any damage that may require repair is identified and reported to the Mission Management Team no later than the fifth day of flight, the point at which the schedule for the remainder of the mission is finalized. In particular, if a repair must be executed, it must be identified by this point so that adequate resources (e.g., breathable oxygen, water, spacecraft power) can be allocated. Identifying a problem late in the mission may be of little use, as there may not be adequate resources remaining to effect a repair.
It is within this compressed timeline that high-fidelity analysis must be performed if it is to be of value to the overall process. The data environment is also highly dynamic, as new characterizations of a damage site are continuously acquired. Given the timeline, a high-fidelity analysis must have a turnaround time of approximately eight hours or less to be useful. This requirement poses a number of challenges.
The two primary CFD codes used by the reentry aerothermodynamics community at NASA are LAURA and DPLR, from Langley and Ames Research Centers, respectively. Both codes are block-structured, finite-volume solvers that model the thermochemical-nonequilibrium Navier-Stokes equations. LAURA [5] was originally written in Fortran 77 and was highly optimized for the vector supercomputers of its day; subsequent modifications have incorporated MPI for distributed-memory parallelism. DPLR [6], written in Fortran 90, is a newer code and was designed from its inception to use MPI on distributed-memory architectures with cache-based commodity processors.
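Both codes thus share the same basic distributed-memory pattern: the grid is partitioned into blocks, each MPI rank advances the solution on its own block(s), and ghost-cell (halo) data are exchanged with neighboring ranks at every iteration. The C sketch below illustrates this halo-exchange pattern for a simple one-dimensional decomposition; it is illustrative only and is not taken from either solver, whose block topology and exchange machinery are far more general.

```c
/* Minimal sketch of the halo-exchange pattern used by distributed-memory,
 * block-structured solvers such as LAURA and DPLR.  Illustrative 1-D
 * decomposition only; not code from either solver. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NLOCAL 100  /* interior cells owned by each rank (illustrative) */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* u[0] and u[NLOCAL+1] are ghost cells filled from neighboring ranks. */
    double *u = calloc(NLOCAL + 2, sizeof(double));
    for (int i = 1; i <= NLOCAL; i++)
        u[i] = (double)rank;  /* placeholder field data */

    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Exchange one layer of ghost cells with each neighbor; a real solver
     * does this for every face of every block, every iteration. */
    MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  0,
                 &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOCAL],     1, MPI_DOUBLE, right, 1,
                 &u[0],          1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: left ghost = %g, right ghost = %g\n",
           rank, u[0], u[NLOCAL + 1]);

    free(u);
    MPI_Finalize();
    return 0;
}
```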
The Columbia supercomputer, installed and maintained by the NASA Advanced Supercomputing (NAS) Division, is the primary resource used for these analyses. Columbia comprises 20 SGI Altix nodes, each containing 512 Intel Itanium 2 processors, for 10,240 processors in total. (Columbia was ranked 20th on the November 2007 TOP500 list.) Prior to each launch, NAS personnel reserve one node for dedicated mission support and alert the user community that additional resources may be reallocated if necessary. Columbia is augmented with department-level cluster resources to provide redundancy (albeit at reduced capability) in case of emergency.
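For illustration, an analysis job targeted at the dedicated node might be submitted through a PBS batch script along the following lines. The queue name, processor count, and executable name here are hypothetical; only the general shape of the submission is intended.

```
#!/bin/bash
#PBS -N damage_assessment
#PBS -q mission_support    # hypothetical dedicated-support queue
#PBS -l ncpus=500          # hypothetical CPU count within one 512-CPU node
#PBS -l walltime=08:00:00  # sized to the ~8-hour turnaround requirement

cd $PBS_O_WORKDIR
mpirun -np 500 ./solver    # hypothetical solver executable
```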
Institutional policies preclude major modification to either the software environment on the machines or the supporting network infrastructure during a “lockdown” period leading up to launch. This helps ensure that resources are available and function as intended when called upon. To provide but one example, this restriction prevents an overzealous firewall modification from cutting off access to resources.






