News and Announcements
Jack Dongarra a Foreign Member of the Royal Society
On April 16, 2019, Jack Dongarra was elected as a Foreign Member of the Royal Society (ForMemRS) for his contributions to mathematics and computational science and engineering. The Royal Society is the oldest scientific academy in continuous existence, dating back to 1660, and members have included Isaac Newton, Charles Darwin, and Alan Turing, among many other distinguished scientists. Congratulations, Jack!
Exascale with Aurora?
Will the name of the first US exascale machine be "Aurora"? If the Department of Energy (DOE) has anything to say about it, then yes: Aurora will be the first supercomputer in the United States to reach 10¹⁸ floating point operations per second. For a cool $500 million, this new machine will be installed at Argonne National Laboratory sometime in 2021.
It is very likely that other major players in HPC will also have their own exascale machines by 2021—especially considering China has accelerated (pun intended) its commitment to HPC and currently has 227 systems on the TOP500, comprising nearly half of the list. As of November 2018, the United States has only 109 systems on the TOP500.
In fact, speaking of the TOP500, ICL's Jack Dongarra is aware of three possible exascale systems that could be installed in China sometime in 2020.
With that in mind, the US DOE has allocated $1.8 billion for additional exascale machines that could be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. What has at times seemed like a slow-motion race to exascale is now at our doorstep.
Click here to read more about Aurora and DOE plans for exascale (New York Times).
ICL Winter Reception
Cruising into 2019 with spring right around the corner, the ICL Winter Reception was once again held at Calhoun’s on the River. The reception offered a welcome opportunity for about 50 members of ICL and their loved ones to relax, eat, drink, and enjoy the camaraderie.
Research Spotlight Award from the Office of Research and Engagement

Research Spotlight Award Winners (Left to Right): Interim Chancellor Wayne Davis, Bill Fox, Jack Dongarra, Michael Mason, Kimberly Powell, Alex Bentley (accepting for Dawnie Steadman), Katherine Ambroziak (accepting for Phillip Enquist), Lou Gross, Daniel Feller, Suzie Allard, Sarah Colby
Jack Dongarra was presented with a Research Spotlight Award from the Office of Research and Engagement (ORE) on March 25, 2019. The ORE awards are a recognition of comprehensive research enterprises—including activities related to funding, mentorship, creative achievement, community engagement, and responsible conduct of research. Congratulations, Jack!
Conference Reports
VI-HPS Knoxville
On April 9–12, ICL hosted the Virtual Institute for High Productivity Supercomputing’s (VI-HPS’s) 31st tuning workshop. ICL is one of the original four founders of VI-HPS, and today the virtual institute combines the expertise of twelve partner institutions—each with experience in the development and application of HPC programming tools and projects that contribute leading-edge technology to the institute and the community at large.
A total of 22 participants took advantage of the tuning workshop, and many of them prepared their own MPI, OpenMP, or hybrid MPI + OpenMP parallel application codes for analysis. This workshop approach to analysis resulted in successful and high-quality engagement between participants and tools instructors during the four days of training.
The editor would like to thank Heike Jagode for her contribution to this article.
Santa Fe GPU Hackathon
Another season, another GPU Hackathon featuring Piotr Luszczek. Although it was nearly spring (merely a week away from the March 11–15 event), it certainly looked like winter was coming instead: the southern Rocky Mountains were hit with a major snow storm and 60 mph winds.
Around 50 people managed to make it to the Hackathon despite the weather, and the Santa Fe round featured a mix of applications leveraging OpenMP 4.5, OpenACC, and Kokkos. All applications were run on the Fluid Numerics Cloud cluster, which is an elastic HPC cluster powered by the Google Cloud Platform that features a variety of GPUs.
Specifically, Piotr’s goal for this round was to improve application integration with respect to the Extreme-scale Scientific Software Development Kit (xSDK).
The editor would like to thank Piotr Luszczek for his contributions to this article.
Recent Releases
HPCG 3.1 Released
The High Performance Conjugate Gradients (HPCG) 3.1 reference code release is now available. The HPCG benchmark is designed to measure performance that is representative of modern scientific applications. It does so by exercising the computational and communication patterns commonly found in real science and engineering codes, which are often based on sparse iterative solvers. HPCG exhibits the same irregular accesses to memory and fine-grain recursive computations that dominate large-scale scientific workloads used to simulate complex physical phenomena.
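For readers unfamiliar with the method behind the benchmark's name, the following is a minimal, unpreconditioned sketch of a conjugate gradient solve in Python. It is for illustration only: the real HPCG benchmark applies the method to a sparse 27-point stencil operator on a 3-D grid, adds a symmetric Gauss–Seidel preconditioned multigrid, and runs under MPI + OpenMP, none of which are shown here.

```python
import numpy as np

def cg(A, b, tol=1e-8, max_iter=50):
    """Unpreconditioned conjugate gradient solve of A x = b,
    where A is symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                        # the matrix-vector product (SpMV in HPCG)
        alpha = rs_old / (p @ Ap)         # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p     # new conjugate search direction
        rs_old = rs_new
    return x

# Tiny SPD test system (HPCG uses a large sparse stencil matrix instead)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
```

Even this toy version shows why the benchmark stresses memory systems: each iteration is dominated by a matrix-vector product and a handful of vector updates, with very little arithmetic per byte moved.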
Improvements to the HPCG 3.1 code include:
- Switched the output format for reporting the results from YAML to a basic line-oriented, key-value format with nested naming scheme for the keys.
- Added a faster search for the optimal 3-D grid partitioning of a given integer that does not require a combinatorial search through all 3-set partitionings of the prime factors.
- Closed the outstanding bugs reported as issues on HPCG’s GitHub project page and incorporated the fixes in the source code.
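To make the grid-partitioning item above concrete, here is a hypothetical brute-force enumeration of 3-D process-grid shapes, i.e., the kind of exhaustive search the new HPCG 3.1 code avoids. The function names and the "most cubic" selection heuristic are illustrative assumptions, not HPCG's actual routine.

```python
def grid_partitions(n):
    """Enumerate all factorizations n = nx * ny * nz into three
    positive integers, i.e., candidate 3-D process-grid shapes."""
    parts = []
    for nx in range(1, n + 1):
        if n % nx:
            continue
        m = n // nx
        for ny in range(1, m + 1):
            if m % ny:
                continue
            parts.append((nx, ny, m // ny))
    return parts

def most_cubic(n):
    """Pick the shape closest to a cube by minimizing total face area,
    a common heuristic for balancing halo-exchange traffic."""
    return min(grid_partitions(n),
               key=lambda s: s[0] * s[1] + s[1] * s[2] + s[0] * s[2])
```

For example, for 8 processes the most cubic shape is the 2 × 2 × 2 grid. Working from the prime factorization instead of trial division, as the new code does, avoids scanning every candidate divisor.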
Click here to download the tarball. Follow HPCG’s development on GitHub.
Interview

Dong Zhong
Where are you from, originally?
I was born in Shaanxi, China, one of the cradles of Chinese civilization. Thirteen feudal dynasties established their capitals in our province during a span of more than 1,100 years.
Can you summarize your educational background?
I earned my Bachelor’s degree from Toing University in Shanghai, where I majored in computer science. I earned my Master’s degree from Zhejiang University in Hangzhou, where my research focused on micro-satellite operating system design and implementation, including the design of the operating system, the data structures, and the subsystem interfaces (APIs) used to control the satellite and communicate with subsystems. The subsystems included the orbit control system, attitude determination and control system, power supply, telemetry and control system, and GPS.
How did you first hear about ICL, and what made you want to work here?
I know some former students from ICL, and I really like the research topics and enjoy the working environment.
What is your focus here? What are you working on?
I am working in the DisCo group as part of the Open MPI project, and my research focuses on fault-tolerant distributed systems, including node/process failure detection and reliable message propagation. I am also involved in the Cross-layer Application-Aware Resilience at Extreme Scale (CAARES) project doing related work.
What would you consider the most valuable lesson you have learned so far at ICL?
It is very important to work in a community (such as the Open MPI community) and have good connections with people from industry and research labs.
What are your interests/hobbies outside of work?
I have a pair of lovebirds and several kinds of fish. Keeping them doesn’t require too much effort, but it brings me a lot of joy.
Tell us something about yourself that might surprise people.
I am good at breeding birds, and my bird has hatched three clutches so far. The breeding process is interesting, and the nursing part is challenging.
If you weren’t working at ICL, where would you like to be working and why?
I’d like to be a research assistant or software engineer working with satellite technology or related topics.