News and Announcements
Article Highlights the Potential Impact of Exascale Computing
In an article written for a general audience, Paul Messina, director of the US Department of Energy’s Exascale Computing Project (ECP), conveys the challenges and benefits of achieving a supercomputing speed of at least a billion billion operations per second.
The quest to reach exascale, he explains, will involve not only ratcheting up speed 50- to 100-fold over the fastest computers in broad use today but also addressing the associated data storage and analysis issues and enhancing the software environment.
Success in achieving exascale computing could have widespread effects on society, according to the article. It cites potential benefits in pollution reduction, materials science and the related creation of new technologies and inventions, alternative energy solutions, advances in healthcare, more accurate and timely weather prediction, and better urban planning to enhance the quality of life.
Opportunities for Students Abound at SC17
The immense value of firsthand encounters to personal growth is hard to quantify, but Einstein summed it up pretty well when he said, “The only source of knowledge is experience.”
Tapping into various experiences is exactly what students who attend the SC17 Supercomputing Conference in Denver, November 12–17, can do.
Created to build a strong and diverse student community in high performance computing, the conference’s Students@SC program features a student cluster competition, student volunteers program, HPC for undergraduates program, doctoral showcase, and technical program with student participation.
Students@SC also presents a job and opportunity fair, a mentor/protégé program, networking events, and full access to the conference.
What’s more, reduced conference registration fees, travel grants, housing, and partial meal coverage are available on a limited basis.
For more information and details on how to apply, visit the Students@SC website.
Conference Reports
The MPI Forum
Three working groups were particularly active during the MPI Forum meeting, February 27–March 2, at the Microsoft facility in downtown Portland, OR.
The MPI Forum is the standardization body that decides on improvements and evolutions of the Message Passing Interface specification for programming parallel computers.
The Error Management working group, led by ICL’s Aurelien Bouteiller, concentrated on the User Level Failure Mitigation (ULFM) chapter, whose main proposal defines the behavior of MPI after crash failures. The group considered extensions to the proposal intended to ease the writing of application codes that switch between global and localized recovery patterns.
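For readers unfamiliar with the recovery patterns mentioned above, the following is a minimal sketch of a global-recovery loop, assuming the MPIX_-prefixed extensions distributed with the ULFM prototype; the chapter text under discussion may differ, and the checkpoint variables here are hypothetical application pieces.

```cpp
// Minimal sketch of a global-recovery pattern under ULFM. Hedged: the MPIX_
// calls below follow the publicly distributed ULFM prototype; the proposal
// under discussion may differ. 'checkpointed_step' and 'do_step' are
// hypothetical application pieces.
#include <mpi.h>
#include <mpi-ext.h>   // MPIX_ extensions (Open MPI convention for prototype features)

static int checkpointed_step = 0;             // last globally saved step (hypothetical)
static void do_step(MPI_Comm comm, int step) { (void)comm; (void)step; /* application work */ }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm world;
    MPI_Comm_dup(MPI_COMM_WORLD, &world);
    // Failures must be reported as return codes rather than aborting the job.
    MPI_Comm_set_errhandler(world, MPI_ERRORS_RETURN);

    for (int step = checkpointed_step; step < 100; ++step) {
        do_step(world, step);
        if (MPI_Barrier(world) != MPI_SUCCESS) {   // a process failure surfaced
            MPIX_Comm_revoke(world);               // interrupt ranks blocked in communication
            MPI_Comm survivors;
            MPIX_Comm_shrink(world, &survivors);   // agree on a communicator of survivors
            MPI_Comm_free(&world);
            world = survivors;
            step = checkpointed_step - 1;          // global recovery: everyone rolls back
        }
    }
    MPI_Comm_free(&world);
    MPI_Finalize();
    return 0;
}
```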
The forum’s Hybrid working group reviewed a proposal known as Endpoints. This proposal defines a mechanism to create multiple MPI ranks per MPI process, possibly increasing multithreaded performance and permitting threads—the smallest sequences of programmed instructions that can be managed by a scheduler—to participate in collective communication.
Feedback from MPI implementors suggests that the performance benefit of the Endpoints proposal would be limited, but it remains under consideration for its collective communication capabilities.
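As a rough illustration of the mechanism, the sketch below follows the interface circulated in the Endpoints proposal papers; MPI_Comm_create_endpoints is not part of the MPI standard, and its name and signature here are assumptions that could change as the proposal evolves.

```cpp
// Hedged sketch of the Endpoints idea: one MPI process requests several ranks
// (endpoints), and each OpenMP thread then drives its own rank, including in
// collectives. MPI_Comm_create_endpoints is proposal-only, NOT standard MPI.
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    const int num_ep = 4;                      // request 4 endpoints in this process
    MPI_Comm ep_comm[num_ep];
    // Proposed call (assumed signature): returns one communicator handle per endpoint.
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, num_ep, MPI_INFO_NULL, ep_comm);

    #pragma omp parallel num_threads(num_ep)
    {
        int ep = omp_get_thread_num();
        int rank;
        MPI_Comm_rank(ep_comm[ep], &rank);     // each thread sees its own rank
        double local = 1.0, sum = 0.0;
        MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, ep_comm[ep]);  // threads join the collective
        std::printf("endpoint rank %d: sum = %g\n", rank, sum);
        MPI_Comm_free(&ep_comm[ep]);
    }

    MPI_Finalize();
    return 0;
}
```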
A third working group examined Sessions, an emerging proposal. Sessions would permit the creation of localized objects, or small communicators, without relying on global objects such as MPI_COMM_WORLD, by interrogating the MPI runtime for certain properties. The group’s discussion centered on the need for these queries to return consistent values across the MPI universe and on interactions with the runtime, batch, and tools environments surrounding MPI.
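To convey the flavor of the proposal, here is a minimal sketch of building a communicator from a runtime-provided process set rather than from MPI_COMM_WORLD. The names follow later drafts of the Sessions interface and are assumptions relative to the version discussed at this meeting.

```cpp
// Hedged sketch of a Sessions-style program: query the runtime for a process
// set and build a communicator from it, never touching MPI_COMM_WORLD.
// Names follow later drafts of the proposal and may differ from the version
// discussed at the meeting.
#include <mpi.h>

int main(int argc, char **argv)
{
    (void)argc; (void)argv;

    MPI_Session session;
    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

    // Interrogate the runtime: turn a named process set into an MPI group.
    MPI_Group group;
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

    // Build a communicator from that group (the string tag disambiguates
    // concurrent creations).
    MPI_Comm comm;
    MPI_Comm_create_from_group(group, "org.example.sessions-sketch",
                               MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

    int rank;
    MPI_Comm_rank(comm, &rank);
    // ... application work on 'comm' ...

    MPI_Comm_free(&comm);
    MPI_Group_free(&group);
    MPI_Session_finalize(&session);
    return 0;
}
```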
The next meeting of the MPI Forum is in June 2017.
BDEC China
The Big Data and Extreme-Scale Computing (BDEC) workshop on March 9–10 in Wuxi, China, focused on the theme “Pathways to Convergence” between big data and exascale computing.
This workshop was the latest installment in a series of meetings in which eminent representatives of the scientific computing community are endeavoring to map out the ways in which big-data challenges are intersecting with, and affecting, ongoing national plans for achieving exascale computing. Jack Dongarra, Terry Moore, and Tracy Rafferty participated on behalf of ICL.
With the help of the leadership of the National Supercomputing Center in Wuxi, which hosted the meeting, the forty-four workshop participants included not only outstanding members of the scientific computing community from the United States, the European Union, and Japan but also five of their colleagues from China, the largest number from that country ever to attend a BDEC meeting.
The workshop had two main goals.
One was to update the group on the ongoing national and regional development of big-data and supercomputing infrastructure and applications; the other was to organize and focus the writing process that will produce the BDEC “Pathways to Convergence” report.
A set of keynote and plenary talks addressed the first objective by illuminating ongoing infrastructure efforts in different countries and describing emerging applications of machine learning and artificial intelligence in traditional HPC areas of inquiry, such as cancer research and meteorology.
The second goal was the target of the breakout sessions of three working groups, one for HPC and cloud infrastructure, one for “edge” computing environments—all the information and communication technology lying outside the centralized HPC/cloud system—and one for application paradigms that exploit one or both of those technological ecosystems.
The distinction between the two environments is based on the fact that, while the highest concentrations of computing and storage resources are in the HPC/cloud, the vast majority of the rising flood of data is being generated at the edge.
The three breakout areas of BDEC China align with the key parts of the “Pathways” document that remain to be completed. Building on the results of the workshop, BDEC leadership plans to release the report late this spring.
TESSE Workgroup Meeting
When the workgroup of the Task-Based Environment for Scientific Simulation of Extreme Scale (TESSE) project met March 13–15 in Roanoke, VA, the target of interest was how to more tightly integrate the scientific software packages PaRSEC, MADNESS, and TiledArray.
The meeting enabled developers to design and experiment with the PaRSEC domain-specific language for computational chemistry. This computer language could expose all the available parallelism in the algorithm while maintaining consistency and compactness of representation.
The workgroup has moved away from traditional C/Fortran programming and heavily toward C++14, using techniques known as advanced templating and recursive-template metaprogramming.
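As a generic illustration of the recursive-template style (this is not PaRSEC, MADNESS, or TiledArray code, only a flavor of the technique), a small compile-time-unrolled kernel might look like this:

```cpp
// Generic C++14 illustration of recursive-template metaprogramming: a
// fixed-size dot product unrolled at compile time via template recursion.
#include <array>
#include <cstddef>
#include <iostream>

// Recursive case: multiply element I and recurse on I + 1.
template <std::size_t I, std::size_t N>
struct Dot {
    template <typename T>
    static T apply(const std::array<T, N>& a, const std::array<T, N>& b) {
        return a[I] * b[I] + Dot<I + 1, N>::apply(a, b);
    }
};

// Base case: I == N terminates the recursion.
template <std::size_t N>
struct Dot<N, N> {
    template <typename T>
    static T apply(const std::array<T, N>&, const std::array<T, N>&) { return T(0); }
};

int main() {
    std::array<double, 3> a{{1, 2, 3}}, b{{4, 5, 6}};
    std::cout << Dot<0, 3>::apply(a, b) << "\n";   // prints 32
    return 0;
}
```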
Attending the meeting from ICL were George Bosilca, Damien Genet, Thomas Herault, and Jeffrey Steill.
Parallel 2017
ICL’s Hartwig Anzt gave a talk in German at Parallel 2017, the software conference for parallel programming, March 29–31, in Heidelberg, Germany.
Hartwig’s talk was on the important role that block-Jacobi preconditioners play in scientific computing. In mathematics, a preconditioner puts a problem into a form that is more suitable for numerical solution methods. Block-Jacobi preconditioners are especially useful on parallel computing hardware.
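As a brief illustration of the idea (standard textbook material, not taken from the talk): if the matrix A of a linear system Ax = b is partitioned into diagonal blocks, the block-Jacobi preconditioner keeps only those blocks,

```latex
M \;=\;
\begin{pmatrix}
A_{11} &        &        \\
       & A_{22} &        \\
       &        & \ddots
\end{pmatrix},
\qquad
\text{and one solves } M^{-1} A x = M^{-1} b \text{ instead of } A x = b .
```

Because each block can be factored and applied independently of the others, applying M⁻¹ parallelizes naturally, which is what makes the method attractive on GPUs and other parallel hardware.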
Recent Releases
Open MPI 2.1.0
Open MPI 2.1.0 has been released!
The Open MPI project is an open-source implementation of the Message Passing Interface (MPI) that is developed and maintained by a consortium of academic, research, and industry partners. MPI primarily addresses the message-passing parallel programming model, in which data is moved from the address space of one process to that of another through cooperative operations.
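A minimal sketch of that model (generic MPI usage, not specific to this release): rank 0 moves a value from its own address space into rank 1’s through a matched send/receive pair.

```cpp
// Minimal message-passing example: rank 0 sends a value to rank 1, which
// receives it into its own address space. Run with at least two processes.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double payload = 3.14;
        MPI_Send(&payload, 1, MPI_DOUBLE, 1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double payload = 0.0;
        MPI_Recv(&payload, 1, MPI_DOUBLE, 0, /*tag=*/0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %g\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```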
Open MPI integrates technologies and resources from several other projects (HARNESS/FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) to build the best MPI library available. A completely new MPI-3.1-compliant implementation, Open MPI offers advantages for system and software vendors, application developers, and computer science researchers.
ICL’s efforts in the context of Open MPI have significantly improved its scalability, performance in manycore environments, and architecture-aware capabilities, such as adaptive shared-memory behaviors and dynamic collective selection, making it ready for next-generation exascale challenges.
The main focus of the Open MPI v2.1.0 release was an update to PMIx v1.2.1. When PMIx is used (e.g., via mpirun-based launches or via direct launches with recent versions of popular resource managers), launch-time scalability is improved and the runtime memory footprint is greatly decreased for large numbers of MPI/OpenSHMEM processes.
Also among the major new features of Open MPI 2.1.0 are the following:
- OpenSHMEM API conformance is updated to v1.3.
- The usnic BTL now supports MPI_THREAD_MULTIPLE.
- General, overall performance improvements were made to MPI_THREAD_MULTIPLE (requested as shown in the sketch after this list).
- A summary message at the bottom of the configure output now reports many of the configuration options specified and/or discovered by Open MPI.
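For reference, here is a hedged sketch of how an application requests MPI_THREAD_MULTIPLE at initialization (generic MPI usage, not Open MPI-specific):

```cpp
// Request full multithreaded support so that several threads may call MPI
// concurrently; check the level the library actually provides.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    int provided = MPI_THREAD_SINGLE;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        std::printf("MPI_THREAD_MULTIPLE not available (got level %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    // ... threads created here may each make MPI calls ...
    MPI_Finalize();
    return 0;
}
```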
As compared with prior versions, Open MPI 2.1.0 has no changes in behavior.
Interview

Jeffrey Steill
Where are you from originally?
I am a Midwestern boy with a propensity to roam. I was born in Lafayette, Indiana, and I grew up in that giant cornfield that stretches from Iowa through Illinois, Indiana, and Ohio. I moved to Knoxville to start a family and then we moved to the Netherlands and California before coming back.
What is your educational background?
I majored in changing majors at The Ohio State University before moving to Tennessee and completing my undergraduate degree in Chemistry. I very much enjoyed my experience at UT Knoxville and stayed here to earn a doctorate in chemical physics.
What are your research interests?
I am passionate about the physical sciences in general, and I have a particular fascination with the underlying physics governing chemical reactions. I intend to contribute to our understanding of energy, climate, and biology as well as influence the overall effectiveness of science in modern society. I see an enormous potential for impact in these fields through improved approaches to computing molecular properties.
What drew you to ICL?
In my career as an experimental research scientist it has become increasingly clear that the most effective experimental apparatus humans have yet created is the computer. I was fortunate enough to have a world-class research group at my doorstep here at UT Knoxville that is pushing the bounds of this technology that will inevitably come to dominate the physical sciences, if not every aspect of modern endeavor. I found the opportunities and ambition manifest in the group’s work to be challenging and inspiring.
What are you working on here?
I am working with the TESSE team implementing the PaRSEC task-based framework for computational chemistry, which has the potential to revolutionize multiple scientific fields due to the increased scope of the problem complexity that is tractable.
If you weren’t working at ICL, where would you like to be and why?
Well, I made a very deliberate choice to shift my career in order to take advantage of the incredible opportunity here at ICL, so that is a very tough question. I guess going on tour with Dylan or U2 would beat the pants off this, though.
What are your outside interests/hobbies?
I am definitely a boring family guy—I mostly just enjoy embarrassing my kids in front of their friends. In addition, I find real joy in music; I play guitar regularly and picked up a little mandolin living here in Bluegrass country. I try to get outside for a walk in the woods when I can, and I very much enjoy traveling.
Tell us something unique about yourself that might surprise some people.
I’ve published plenty of scientific papers but still don’t know anything about anything. Oh, and I’ve never been arrested.