Some of the ongoing work includes advances in component capabilities for massively parallel and heterogeneous architectures, runtime enforcement of behavioral semantics, richer expression of complex interactions between components, and parallel coupling. We provide brief highlights below; see [3, 14] for additional details.
Emerging HPC Environments. Scientists developing petascale computational science capabilities continue to face major challenges in effectively using emerging high-performance computing (HPC) architectures, which are characterized by large processor counts and increasing use of heterogeneous, specialized environments. We are therefore developing new tools that help CCA users simplify and accelerate the development of true petascale applications on diverse hardware platforms. Our goal is to enable CCA users to flexibly and dynamically express multiple levels of parallelism,15 to transparently exploit specialized coprocessing resources, and to build intelligent application-level responses to the hardware failures that are inevitable on systems of this scale. For example, we are working with a bioinformatics/proteomics application team to analyze data generated by mass spectrometers at PNNL.16
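As a rough illustration of the multilevel-parallelism idea (the names and group count here are assumptions, not the CCA or Babel API), the following mpi4py sketch splits the global communicator into independent task groups, combining task parallelism across groups with data parallelism within each group:

    # Minimal mpi4py sketch: split the global ranks into independent task
    # groups; each group runs a data-parallel kernel on its own
    # sub-communicator (two levels of parallelism).
    from mpi4py import MPI

    def analyze(task_id, comm):
        """Stand-in for a data-parallel kernel run within one task group."""
        return comm.allreduce(task_id, op=MPI.SUM)

    world = MPI.COMM_WORLD
    n_groups = 4                              # assumed number of concurrent tasks
    color = world.Get_rank() % n_groups       # map each rank to a task group
    group_comm = world.Split(color, world.Get_rank())

    result = analyze(color, group_comm)       # groups proceed independently
    print(f"rank {world.Get_rank()}, group {color}: result = {result}")
    group_comm.Free()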
Software Quality and Verification. To help make the vision of interchangeable components a reality for scientific software, we are developing capabilities for composition- and execution-time verification of interface semantics.17 18 Component interfaces, expressed separately from implementations, can be extended with semantic information to provide concise specifications that are both human-readable and machine-interpretable. Unlike traditional verification techniques, which rely either on post-execution comparisons with prior or analytical results or on algorithm-based fault tolerance, this approach detects errors closer to the point of failure. The result is improved testing, debugging, and runtime monitoring of software quality, giving developers a powerful tool for catching errors early and ensuring correct software usage.
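This semantic information can take the form of interface contracts: preconditions and postconditions attached to methods and checked as calls are made.17 18 The Python sketch below (illustrative names and checks only, not the SIDL/Babel contract mechanism) shows the basic idea of call-time enforcement, so that violations surface near the point of failure:

    # Minimal sketch of executable interface contracts: preconditions and
    # postconditions are attached to a method and checked at call time.
    import functools

    def contract(pre=None, post=None):
        """Wrap a method with optional precondition/postcondition checks."""
        def decorate(method):
            @functools.wraps(method)
            def wrapper(*args, **kwargs):
                if pre is not None and not pre(*args, **kwargs):
                    raise AssertionError(f"precondition violated: {method.__name__}")
                result = method(*args, **kwargs)
                if post is not None and not post(result, *args, **kwargs):
                    raise AssertionError(f"postcondition violated: {method.__name__}")
                return result
            return wrapper
        return decorate

    class VectorOps:
        @contract(pre=lambda self, x, y: len(x) == len(y),           # inputs must match in length
                  post=lambda r, self, x, y: isinstance(r, float))   # result must be a float
        def dot(self, x, y):
            return float(sum(a * b for a, b in zip(x, y)))

    VectorOps().dot([1.0, 2.0], [3.0, 4.0])   # passes both checks
    # VectorOps().dot([1.0], [2.0, 3.0])      # would raise: precondition violated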
Computational Quality of Service. As computational science progresses toward ever more realistic multi-physics applications, no single research group can effectively select or tune all components of a given application, and no single solution strategy can span the entire problem space efficiently. Common component interfaces enable easy access to suites of independently developed algorithms and implementations. The challenge then becomes making the best runtime choices for reliability, accuracy, and performance. Motivated by simulations in combustion,10 quantum chemistry,19 fusion,13 and accelerators,20 TASCS researchers are addressing this challenge by developing tools for Computational Quality of Service (CQoS): the automatic selection and configuration of components to suit a particular computational purpose and environment.21 The two main facets of the CQoS tools are (1) measurement and analysis infrastructure and (2) control infrastructure for dynamic component replacement and domain-specific decision making.22
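The following Python sketch (hypothetical component names and a deliberately simple selection policy, not the TASCS CQoS infrastructure itself) illustrates how these two facets interact: candidate implementations of a common interface are measured on a trial problem, and the control layer selects the best performer for subsequent use:

    # Minimal sketch of CQoS-style component selection: time each candidate
    # implementation of a common "solve" interface on a trial problem, then
    # substitute the fastest one for the remaining work.
    import time

    def jacobi_solve(problem):                # stand-in candidate components
        time.sleep(0.02)
        return "jacobi"

    def cg_solve(problem):
        time.sleep(0.01)
        return "cg"

    candidates = {"Jacobi": jacobi_solve, "CG": cg_solve}

    def select_component(candidates, trial_problem):
        """Measurement facet: time each candidate; control facet: pick the fastest."""
        timings = {}
        for name, solve in candidates.items():
            start = time.perf_counter()
            solve(trial_problem)
            timings[name] = time.perf_counter() - start
        best = min(timings, key=timings.get)
        return best, candidates[best]

    name, solver = select_component(candidates, trial_problem=None)
    print(f"selected {name} for the remaining solves")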
Parallel Data Redistribution and Parallel Remote Method Invocation. Parallel components raise questions about both the semantics of method invocations between them and the mechanics of redistributing parallel data. The so-called MxN problem (in which the M processes associated with one component must exchange data with the N processes associated with another) arises frequently when multiple simulation components are joined in a single application. Method invocations between parallel components offer a natural opportunity to automate the data redistribution and translation that such interactions require. General support for this capability allows an application to combine task-based parallelism and domain decomposition, regardless of the scaling characteristics and resource constraints of the individual components, and is being integrated into the Babel compiler.
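To make the bookkeeping behind MxN redistribution concrete, the sketch below (an illustration only, not Babel's implementation) computes the communication schedule for a 1-D array that is block-distributed across M producer processes and must be delivered to a different block distribution across N consumer processes:

    # Minimal sketch of MxN redistribution bookkeeping for a 1-D
    # block-distributed array: determine which producer rank must send which
    # global index range to which consumer rank.
    def block_range(rank, nprocs, n):
        """Global [lo, hi) index range owned by `rank` in a block distribution."""
        base, extra = divmod(n, nprocs)
        lo = rank * base + min(rank, extra)
        hi = lo + base + (1 if rank < extra else 0)
        return lo, hi

    def mxn_schedule(m, n_ranks, n_elems):
        """List of (producer, consumer, lo, hi) messages covering every element."""
        schedule = []
        for p in range(m):
            plo, phi = block_range(p, m, n_elems)
            for c in range(n_ranks):
                clo, chi = block_range(c, n_ranks, n_elems)
                lo, hi = max(plo, clo), min(phi, chi)
                if lo < hi:                   # non-empty overlap => one message
                    schedule.append((p, c, lo, hi))
        return schedule

    # Example: 100 elements moving from M=3 producer ranks to N=4 consumer ranks.
    for producer, consumer, lo, hi in mxn_schedule(3, 4, 100):
        print(f"producer {producer} -> consumer {consumer}: indices [{lo}, {hi})")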
The Common Component Architecture is a solid foundation for developing modular, maintainable, high-performance simulations. Through the support of the TASCS Center and its collaborators, the surrounding ecosystem continues to flourish, providing new functionality for taming the complexity of multi-physics, multi-scale, scalable applications.
This work was supported by the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, through the Office of Advanced Scientific Computing Research, Office of Science. The CCA Forum is a community involving participants from numerous DOE national laboratories, universities, companies, and other organizations.
2 Allan, B. A., Armstrong, R., Bernholdt, D. E., Bertrand, F., Chiu, K., Dahlgren, T. L., Damevski, K., Elwasif, W. R., Epperly, T. G. W., Govindaraju, M., Katz, D. S., Kohl, J. A., Krishnan, M., Kumfert, G., Larson, J. W., Lefantzi, S., Lewis, M. J., Malony, A. D., McInnes, L. C., Nieplocha, J., Norris, B., Parker, S. G., Ray, J., Shende, S., Windus, T. L., Zhou, S. “A Component Architecture for High-Performance Scientific Computing,” Intl. J. High-Perf. Computing Appl., 2006, pp. 163-202.
3 CCA Forum - cca-forum.org
4 Cactus - www.cactuscode.org
5 Earth System Modeling Framework - www.esmf.ucar.edu/
6 Collins, N., Theurich, G., DeLuca, C., Suarez, M., Trayanov, A., Balaji, V., Li, P., Yang, W., Hill, C., da Silva, A. “Design and Implementation of Components in the Earth System Modeling Framework,” Intl. J. High-Perf. Computing Appl., 2005, pp. 341-350.
7 CCA Specification - www.cca-forum.org/wiki/tiki-index.php?page=CCA+Specification
8 Babel - www.llnl.gov/CASC/components/babel.html
9 CCaffeine - www.cca-forum.org/ccafe/
10 Najm, H. (PI), Computational Facility for Reacting Flow Science (CFRFS) - cfrfs.ca.sandia.gov/.
11 SWIM - cswim.org
12 CPES - www.cims.nyu.edu/cpes/
13 FACETS - www.facetsproject.org/facets
14 McInnes, L., Dahlgren, T., Nieplocha, J., Bernholdt, D., Allan, B., Armstrong, R., Chavarria, D., Elwasif, W., Gorton, I., Kenny, J., Krishnan, M., Malony, A., Norris, B., Ray, J., Shende, S. “Research Initiatives for Plug-and-Play Scientific Computing,” Journal of Physics: Conference Series 78 (2007), (available via www.iop.org/EJ/abstract/1742-6596/78/1/012046).
15 Krishnan, M., Alexeev, Y., Windus, T., Nieplocha, J. “Multilevel Parallelism in Computational Chemistry using Common Component Architecture and Global Arrays,” Proceedings of SuperComputing, 2005.
16 High-Performance Mass Spectrometry Facility, Pacific Northwest National Laboratory - www.emsl.pnl.gov/capabs/hpmsf.shtml
17 Dahlgren, T., Devanbu, P. “Improving Scientific Software Component Quality Through Assertions,” Proc. 2nd Int. Workshop on Software Engineering for High Performance Computing System Applications, 2005, pp. 73-77, (available via csdl.ics.hawaii.edu/se-hpcs/papers/sehpcs-proceedings.pdf).
18 Dahlgren, T. “Performance-Driven Interface Contract Enforcement for Scientific Components,” Proc. 10th Int. Symp. on Component-Based Software Engineering, LNCS 4608, Springer-Verlag, 2007, pp. 157-172.
19 Gordon, M. (PI), Chemistry Framework using the CCA - www.scidac.gov/matchem/better.html
20 Spentzouris, P. (PI), SciDAC Community Petascale Project for Accelerator Science and Simulation (COMPASS) - compass.fnal.gov
21 Norris, B., Ray, J., Armstrong, R., McInnes, L., Bernholdt, D., Elwasif, W., Malony, A., Shende, S. “Computational Quality of Service for Scientific Components,” Proc. Int. Symp. on Component-Based Software Engineering, 2004, Edinburgh, Scotland (available via info.mcs.anl.gov/pub/tech_reports/reports/P1131.pdf).
22 McInnes, L., Ray, J., Armstrong, R., Dahlgren, T., Malony, A., Norris, B., Shende, S., Kenny, J., and Steensland, J. “Computational Quality of Service for Scientific CCA Applications: Composition, Substitution, and Reconfiguration,” Argonne National Laboratory preprint ANL/MCS-P1326-0206, 2006, (available via info.mcs.anl.gov/pub/tech_reports/reports/P1326.pdf).






