News and Announcements
HPCwire Interviews Jack Dongarra
From HPCwire:
HPCwire’s Managing Editor sits down with Jack Dongarra, TOP500 co-founder and Distinguished Professor at the University of Tennessee, during SC21 in St. Louis to discuss the latest TOP500 list, the outlook for global exascale computing, and what exactly is going on in that Viking helmet photo. Plus, what’s in store for 2022.
TOP500 Update
Unveiled at the SC21 conference on November 16, 2021, the 58th edition of the TOP500 saw little change in the Top10. Fugaku continues to hold the No. 1 position that it first earned in June 2020. Its HPL benchmark score is 442 Pflop/s, roughly 3x the performance of Summit at No. 2. Installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, Fugaku was co-developed by RIKEN and Fujitsu and is based on Fujitsu’s custom ARM A64FX processor. It also uses Fujitsu’s Tofu D interconnect to transfer data between nodes.
The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on 48-core AMD EPYC processors running at 2.45 GHz, working together with NVIDIA A100 GPUs carrying 80 GB of memory each, Voyager-EUS2 also utilizes a Mellanox HDR InfiniBand network for data transfer.
Summit, an IBM-built system at Oak Ridge National Laboratory (ORNL) in Tennessee, USA, remains the fastest system in the U.S., holding the No. 2 spot worldwide. It achieves 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two POWER9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
Sierra, a system at Lawrence Livermore National Laboratory in California, USA, is at No. 3. Sunway TaihuLight is listed at No. 4. Perlmutter, at No. 5, entered the Top10 last June. Selene, now at No. 6, is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. Tianhe-2A (Milky Way-2A) is now listed as the No. 7 system. A system called “JUWELS Booster Module” is No. 8. HPC5, at No. 9, is a PowerEdge system built by Dell and installed by the Italian company Eni S.p.A.
Click here to see how the rest of the TOP500 panned out.
| Rank | System | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW) |
|---|---|---|---|---|---|
| 1 | Supercomputer Fugaku – A64FX 48C 2.2GHz, Tofu interconnect D, Fujitsu; RIKEN Center for Computational Science, Japan | 7,630,848 | 442,010.0 | 537,212.0 | 29,899 |
| 2 | Summit – IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, dual-rail Mellanox EDR InfiniBand, IBM; DOE/SC/Oak Ridge National Laboratory, United States | 2,414,592 | 148,600.0 | 200,794.9 | 10,096 |
| 3 | Sierra – IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, dual-rail Mellanox EDR InfiniBand, IBM/NVIDIA/Mellanox; DOE/NNSA/LLNL, United States | 1,572,480 | 94,640.0 | 125,712.0 | 7,438 |
| 4 | Sunway TaihuLight – Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway, NRCPC; National Supercomputing Center in Wuxi, China | 10,649,600 | 93,014.6 | 125,435.9 | 15,371 |
| 5 | Perlmutter – HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10, HPE; DOE/SC/LBNL/NERSC, United States | 761,856 | 70,870.0 | 93,750.0 | 2,589 |
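Since the list is ranked by Rmax, the achieved HPL score, while Rpeak is the theoretical peak, a quick sketch (using only the values copied from the table above; nothing else is assumed) shows how the two relate, and where the roughly 3x Fugaku-to-Summit gap in the text comes from:

```cpp
#include <cstdio>

// Rmax and Rpeak in TFlop/s, copied from the Top 5 table above.
struct Entry { const char* name; double rmax, rpeak; };

int main() {
    const Entry top5[] = {
        {"Fugaku",            442010.0, 537212.0},
        {"Summit",            148600.0, 200794.9},
        {"Sierra",             94640.0, 125712.0},
        {"Sunway TaihuLight",  93014.6, 125435.9},
        {"Perlmutter",         70870.0,  93750.0},
    };
    // HPL efficiency = Rmax / Rpeak; e.g., Fugaku sustains ~82.3% of
    // its peak on HPL, while the other four land roughly at 74-76%.
    for (const auto& e : top5)
        std::printf("%-18s %5.1f%% of peak\n", e.name,
                    100.0 * e.rmax / e.rpeak);
    // The "3x" in the text: 442,010 / 148,600 ~= 2.97.
    std::printf("Fugaku/Summit Rmax ratio: %.2f\n",
                top5[0].rmax / top5[1].rmax);
}
```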
Intel oneAPI Center of Excellence
In October, the University of Tennessee, Knoxville, announced the creation of a new Intel oneAPI Center of Excellence to provide solutions in high performance computing (HPC) and visualization using oneAPI. The center will focus on two projects: porting Ginkgo, an open-source HPC library for sparse linear algebra, to oneAPI for cross-architecture support, and expanding the university’s Intel Graphics and Visualization Institute of XeLLENCE to enable high-end visualization as a service through oneAPI.
ICL consultant Hartwig Anzt heads the project focused on porting the Ginkgo library to oneAPI. This work extends Ginkgo’s support to additional accelerators, including current and future Intel Xe GPUs. Anzt will lead the oneAPI Center of Excellence with a team of experienced sparse linear algebra researchers. The center will conduct research and use oneAPI technology to contribute to the open specification and advance ecosystem adoption. The center’s work will help prepare Ginkgo for the Aurora supercomputer at Argonne National Laboratory.
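To give a flavor of what such a port involves: oneAPI’s core programming model is SYCL (DPC++), and a kernel written once in SYCL can run on Intel CPUs and GPUs alike, which is what makes cross-architecture support possible. The sketch below is purely illustrative, not Ginkgo’s actual code; it runs a toy sparse matrix-vector product in CSR format, the kind of kernel a sparse linear algebra library is built from, on whatever SYCL device is available. The matrix data is made up for the example.

```cpp
#include <sycl/sycl.hpp>  // <CL/sycl.hpp> in older oneAPI releases
#include <iostream>
#include <vector>

int main() {
    // Hypothetical 3x3 sparse matrix in CSR format, for illustration only.
    std::vector<int>    row_ptr = {0, 2, 3, 5};
    std::vector<int>    col_idx = {0, 2, 1, 0, 2};
    std::vector<double> vals    = {4.0, 1.0, 3.0, 2.0, 5.0};
    std::vector<double> x = {1.0, 2.0, 3.0}, y(3, 0.0);

    sycl::queue q;  // default selector: a GPU if present, otherwise the CPU
    std::cout << "Device: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // Buffers copy data to the device and write back at scope exit.
        sycl::buffer<int>    rp(row_ptr.data(), sycl::range<1>(row_ptr.size()));
        sycl::buffer<int>    ci(col_idx.data(), sycl::range<1>(col_idx.size()));
        sycl::buffer<double> va(vals.data(),    sycl::range<1>(vals.size()));
        sycl::buffer<double> xb(x.data(),       sycl::range<1>(x.size()));
        sycl::buffer<double> yb(y.data(),       sycl::range<1>(y.size()));

        q.submit([&](sycl::handler& h) {
            sycl::accessor arp(rp, h, sycl::read_only);
            sycl::accessor aci(ci, h, sycl::read_only);
            sycl::accessor ava(va, h, sycl::read_only);
            sycl::accessor ax (xb, h, sycl::read_only);
            sycl::accessor ay (yb, h, sycl::write_only);
            // One work-item per matrix row: y[i] = sum_k A[i,k] * x[k]
            h.parallel_for(sycl::range<1>(3), [=](sycl::id<1> i) {
                double sum = 0.0;
                for (int k = arp[i]; k < arp[i + 1]; ++k)
                    sum += ava[k] * ax[aci[k]];
                ay[i] = sum;
            });
        });
    }

    for (double yi : y) std::cout << yi << " ";  // expected: 7 6 17
    std::cout << "\n";
}
```

Ginkgo itself hides this kind of device dispatch behind its executor abstraction, so application code can stay unchanged as new backends such as oneAPI are added.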
Additionally, Anzt serves on the oneAPI Math Kernel Library (oneMKL) Technical Advisory Board, providing sparse linear algebra expertise and feedback on the oneAPI libraries, compilers, and tools so that researchers and developers around the world can take advantage of the resulting improvements. Further leveraging the university’s expertise, oneAPI curriculum modules will be developed, included in UT’s coursework, and open-sourced in 2022 to bring oneAPI programming skills to students worldwide.
Conference Reports
SC21 Returns to In-Person
This year saw the first hybrid edition of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC21), held in St. Louis, MO, on November 14–19. After a two-year hiatus from physical attendance, the conference hosted more than 3,200 in-person attendees and 160 exhibitors, with another 3,350 virtual attendees and 40 virtual exhibitors. Vaccine verification, face masks, and social distancing helped ensure the safety of attendees, and the mix of in-person and virtual events provided an environment every bit as creative, inspiring, and meaningful as past SC conferences.
The University of Tennessee groups represented this year were ICL, the GCL (Global Computing Laboratory), and the UT Chattanooga SimCenter. Even with the hybrid nature of the event, ICL was well represented in person, with faculty, research staff, and students giving booth talks, presenting papers and workshops, and leading “Birds of a Feather” sessions.
The dedicated ICL@SC webpage was once again up and running for the event, letting interested parties keep tabs on ICL-related happenings during the conference. The page hosted a list of attendees, a detailed schedule of talks, and the latest project handouts.
With the return to in-person attendance, members of ICL and related groups were able to meet for dinner in St. Louis. The group of roughly twenty-five included current ICL employees and alumni as well as current GCL members. Held at the Hilton 360 rooftop on Wednesday, November 17, the dinner provided an atmosphere of camaraderie and cooperation. ICL alumni each took a moment to re-introduce themselves and discuss their current endeavors. It was a great opportunity for attendees to talk about future collaborations and to maintain relationships with those now working in industry outside of ICL.
Interview

Deborah Penchoff
Where are you from?
I was born in Buenos Aires, Argentina.
Can you summarize your educational background?
My training has been very interdisciplinary. At the beginning, it involved mathematics, computer science, and chemistry. In graduate school, this evolved into applications of HPC and data science to challenges in nuclear and radiochemistry (which was largely facilitated by IGMCS). During my postdoc and early scientific career, my training in computer science and data science became more targeted toward needs in national and nuclear security.
Where did you work before joining ICL?
Before joining ICL, I worked at the Institute for Nuclear Security and the Howard H. Baker Jr. Center for Public Policy. Prior to this, I worked at the UTK Radiochemistry Center of Excellence. While I was working on my PhD and shortly after, I also worked in the ORNL Computer Science and Mathematics Division and at JICS. Prior to my involvement in science, I worked at IBM as a financial and comptroller data analyst.
How did you first hear about the lab, and what made you want to work here?
I heard about ICL from our assistant director, Joan Snoderly. She contacted me and arranged a Zoom call for me to meet the research team. I was (and still am) impressed by the exceptional work pursued by ICL’s researchers and the expertise of the staff.
What is your focus here at ICL? What are you working on?
As associate director, I work with ICL’s administration in the execution of ICL’s strategic plans. I also work with George Bosilca on a DOE research project focused on HPC development for radiochemical applications.
What are your interests/hobbies outside of work?
I like playing piano and various athletic activities. I also like to read, especially when I can hang out with my kitty. When I live in coastal cities, one of my favorite activities is working out at the beach.
Tell us something about yourself that might surprise people.
My upbringing was very artistic. My first job was at a performance of the opera La Bohème in Buenos Aires, Argentina; I was 8 years old. I earned my first degree, in the arts, when I was 16.
If you weren’t working at ICL, where would you like to be working and why?
This is a hard question! I would likely be in some type of scientific role involving HPC in nuclear or radiochemical applications. It is also possible that I would be in a more traditional professorial role. If I had the chance, though, I would probably be involved in programs that increase mentoring for children in underserved communities, particularly those who would be first-generation college students, to connect them to opportunities in STEM.