News and Announcements

HPCwire Interviews Jack Dongarra

From HPCwire:

HPCwire’s Managing Editor sits down with Jack Dongarra, Top500 co-founder and Distinguished Professor at the University of Tennessee, during SC21 in St. Louis to discuss the latest Top500 list, the outlook for global exascale computing and what exactly is going on in that Viking helmet photo. Plus what’s in store for 2022.

TOP500 Update

Unveiled at this year’s SC21 conference in St. Louis on November 16, 2021, the 58th edition of the TOP500 saw little change in the Top10. Fugaku continues to hold the No. 1 position that it first earned in June 2020. Its HPL benchmark score of 442 Pflop/s is roughly 3x the performance of Summit at No. 2. Installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, Fugaku was co-developed by RIKEN and Fujitsu and is based on Fujitsu’s custom Arm A64FX processor. Fugaku also uses Fujitsu’s Tofu D interconnect to transfer data between nodes.

The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on 48-core AMD EPYC processors running at 2.45 GHz paired with NVIDIA A100 GPUs with 80 GB of memory each, Voyager-EUS2 also utilizes Mellanox HDR InfiniBand for data transfer.

Summit, an IBM-built system at Oak Ridge National Laboratory (ORNL) in Tennessee, USA, remains the fastest system in the U.S. and holds the No. 2 spot worldwide. It achieves 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.

Sierra, a system at Lawrence Livermore National Laboratory in California, USA, is at No. 3, and Sunway TaihuLight is listed at No. 4. Perlmutter, at No. 5, entered the Top10 last June. Selene, now at No. 6, is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. Tianhe-2A (Milky Way-2A) is now listed as the No. 7 system, and the “JUWELS Booster Module” is No. 8. HPC5, at No. 9, is a PowerEdge system built by Dell and installed by the Italian energy company Eni S.p.A.

Click here to see how the rest of the TOP500 panned out.

1. Supercomputer Fugaku – A64FX 48C 2.2GHz, Tofu interconnect D, Fujitsu
   RIKEN Center for Computational Science, Japan
   Cores: 7,630,848 | Rmax: 442,010.0 TFlop/s | Rpeak: 537,212.0 TFlop/s | Power: 29,899 kW

2. Summit – IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, dual-rail Mellanox EDR InfiniBand, IBM
   DOE/SC/Oak Ridge National Laboratory, United States
   Cores: 2,414,592 | Rmax: 148,600.0 TFlop/s | Rpeak: 200,794.9 TFlop/s | Power: 10,096 kW

3. Sierra – IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, dual-rail Mellanox EDR InfiniBand, IBM/NVIDIA/Mellanox
   DOE/NNSA/LLNL, United States
   Cores: 1,572,480 | Rmax: 94,640.0 TFlop/s | Rpeak: 125,712.0 TFlop/s | Power: 7,438 kW

4. Sunway TaihuLight – Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway, NRCPC
   National Supercomputing Center in Wuxi, China
   Cores: 10,649,600 | Rmax: 93,014.6 TFlop/s | Rpeak: 125,435.9 TFlop/s | Power: 15,371 kW

5. Perlmutter – HPE Cray EX235n, AMD EPYC 7763 64C 2.45GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10, HPE
   DOE/SC/LBNL/NERSC, United States
   Cores: 761,856 | Rmax: 70,870.0 TFlop/s | Rpeak: 93,750.0 TFlop/s | Power: 2,589 kW
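
For readers who like to check the numbers: the short C++ sketch below (illustrative only, not part of any TOP500 tooling) recomputes each system’s HPL efficiency (Rmax divided by Rpeak) and energy efficiency (Gflop/s per watt) from the figures quoted above. Fugaku, for example, delivers about 82% of its theoretical peak at roughly 14.8 Gflop/s per watt.

```cpp
// Illustrative only: recompute derived metrics from the Top5 entries above.
// HPL efficiency = Rmax / Rpeak; energy efficiency = Rmax per watt.
#include <cstdio>

int main() {
    struct Entry { const char* name; double rmax_tf, rpeak_tf, power_kw; };
    const Entry top5[] = {
        {"Fugaku",            442010.0, 537212.0, 29899.0},
        {"Summit",            148600.0, 200794.9, 10096.0},
        {"Sierra",             94640.0, 125712.0,  7438.0},
        {"Sunway TaihuLight",  93014.6, 125435.9, 15371.0},
        {"Perlmutter",         70870.0,  93750.0,  2589.0},
    };
    for (const Entry& e : top5) {
        double pct_of_peak = 100.0 * e.rmax_tf / e.rpeak_tf;
        // Tflop/s -> Gflop/s is *1000 and kW -> W is *1000, so the factors cancel.
        double gflops_per_watt = e.rmax_tf / e.power_kw;
        std::printf("%-18s %5.1f%% of peak, %5.2f Gflop/s per watt\n",
                    e.name, pct_of_peak, gflops_per_watt);
    }
    return 0;
}
```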

Intel oneAPI Center of Excellence

In October, the University of Tennessee, Knoxville, announced the creation of a new Intel oneAPI Center of Excellence to provide solutions in high-performance computing (HPC) and visualization using oneAPI. The center will focus on two projects: porting the open-source Ginkgo HPC library to oneAPI for cross-architecture support and expanding the university’s Intel Graphics and Visualization Institute of XeLLENCE to enable high-end visualization as a service through oneAPI.

ICL consultant Hartwig Anzt heads the project focused on porting the Ginkgo library to oneAPI. This work extends Ginkgo’s support to additional accelerators, including current and future Intel® Xe GPUs. Anzt will lead the oneAPI Center of Excellence with a team of experienced sparse linear algebra researchers. The center will conduct research and use oneAPI technology to contribute to the open specification and advance ecosystem adoption. The center’s work will help prepare Ginkgo for the Aurora supercomputer at Argonne National Laboratory.
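
To give a flavor of what a oneAPI port involves, here is a minimal sketch of a CSR sparse matrix-vector product written in SYCL/DPC++. This illustrates only the programming model (the matrix, names, and kernel are our own, not Ginkgo’s actual API); the point is that a single source file can target Intel CPUs, Intel Xe GPUs, and other supported backends.

```cpp
// Illustrative sketch only -- not Ginkgo code. A CSR sparse matrix-vector
// product (y = A*x) expressed as a single SYCL kernel, compilable with
// oneAPI's DPC++ compiler for CPUs, Intel Xe GPUs, and other backends.
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    sycl::queue q;  // default selector: a GPU if present, otherwise the CPU

    // The 2x2 matrix [[4, 1], [0, 3]] stored in CSR format.
    std::vector<int>    row_ptr = {0, 2, 3};
    std::vector<int>    col_idx = {0, 1, 1};
    std::vector<double> val     = {4.0, 1.0, 3.0};
    std::vector<double> x = {1.0, 2.0}, y(2, 0.0);

    {   // Buffers hand the host data to whichever device the queue picked.
        sycl::buffer<int>    rp(row_ptr.data(), sycl::range<1>(row_ptr.size()));
        sycl::buffer<int>    ci(col_idx.data(), sycl::range<1>(col_idx.size()));
        sycl::buffer<double> va(val.data(), sycl::range<1>(val.size()));
        sycl::buffer<double> xb(x.data(), sycl::range<1>(x.size()));
        sycl::buffer<double> yb(y.data(), sycl::range<1>(y.size()));

        q.submit([&](sycl::handler& h) {
            sycl::accessor arp(rp, h, sycl::read_only);
            sycl::accessor aci(ci, h, sycl::read_only);
            sycl::accessor ava(va, h, sycl::read_only);
            sycl::accessor ax(xb, h, sycl::read_only);
            sycl::accessor ay(yb, h, sycl::write_only, sycl::no_init);

            // One work-item per matrix row.
            h.parallel_for(sycl::range<1>(2), [=](sycl::id<1> i) {
                const int row = i[0];
                double sum = 0.0;
                for (int k = arp[row]; k < arp[row + 1]; ++k)
                    sum += ava[k] * ax[aci[k]];
                ay[row] = sum;
            });
        });
    }   // leaving this scope writes y back to the host

    std::cout << "y = [" << y[0] << ", " << y[1] << "]\n";  // y = [6, 6]
}
```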

Additionally, Anzt serves on the oneAPI Math Kernel Library (oneMKL) Technical Advisory Board, providing sparse linear algebra expertise and feedback on the oneAPI libraries, compilers, and tools so that researchers and developers around the world can take advantage of the resulting improvements. To further leverage the university’s expertise, oneAPI curriculum modules will be developed, included in UT’s coursework, and open-sourced in 2022 to bring oneAPI programming skills to students worldwide.

Conference Reports

SC21 Returns to In-Person

This year saw the first hybrid edition of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC21), held in St. Louis, MO, on November 14–19. After a two-year hiatus from physical attendance, the conference hosted more than 3,200 in-person attendees and 160 exhibitors, along with another 3,350 virtual attendees and 40 virtual exhibitors. Vaccine verification, face masks, and social distancing helped ensure the safety of attendees, and the mix of in-person and virtual events provided an environment every bit as creative, inspiring, and meaningful as past SC conferences.

The University of Tennessee groups represented this year were ICL, the GCL (Global Computing Laboratory), and the UT Chattanooga SimCenter. Even with the hybrid nature of the event, ICL was well represented in person, with faculty, research staff, and students giving booth talks, presenting papers and workshops, and leading “Birds of a Feather” sessions.

The dedicated ICL@SC webpage was once again up and running for the event. There, interested parties could keep tabs on ICL-related events during the conference. The page hosted a list of attendees, a detailed schedule of talks, and the latest project handouts.

With the return to in-person attendance, members of ICL and related groups were able to meet for dinner in St. Louis. The group of roughly twenty-five included current ICL employees and alumni as well as current GCL members. Held at the Hilton 360 rooftop on Wednesday, November 17, the dinner provided an atmosphere of camaraderie and cooperation. ICL alumni each took a moment to reintroduce themselves and discuss their current endeavors. It was a great opportunity for attendees to talk about future collaborations and to maintain relationships with those now working in industry outside of ICL.

Interview

Deborah Penchoff

Where are you from?
I was born in Buenos Aires, Argentina.

Can you summarize your educational background?
My training has been very interdisciplinary. At the beginning, my training involved mathematics, computer science, and chemistry. In graduate school, this evolved into applications of HPC and data science to challenges in nuclear- and radio-chemistry (which was largely facilitated by IGMCS). During my postdoc and early scientific career, my training in computer science and data science became more targeted towards needs in national and nuclear security.

Where did you work before joining ICL?
Before joining ICL, I worked at the Institute for Nuclear Security and the Howard H. Baker Jr. Center for Public Policy. Prior to this, I worked at the UTK Radiochemistry Center of Excellence. While I was working on my PhD and shortly after, I also worked in the ORNL Computer Science and Mathematics Division and at JICS. Prior to my involvement in science, I worked at IBM as a financial and comptroller data analyst.

How did you first hear about the lab, and what made you want to work here?
I heard about ICL from our assistant director, Joan Snoderly. She contacted me and arranged a Zoom call for me to meet the research team. I was (and still am) impressed by the exceptional work pursued by ICL’s researchers and the expertise of the staff.

What is your focus here at ICL? What are you working on?
As associate director, I work with ICL’s administration in the execution of ICL’s strategic plans. I also work with George Bosilca on a DOE research project focused on HPC development for radiochemical applications.

What are your interests/hobbies outside of work?
I like playing piano and various athletic activities. I also like to read, especially when I can hang out with my kitty. When I live in coastal cities, one of my favorite activities is to work out at the beach.

Tell us something about yourself that might surprise people.
My upbringing was very artistic. My first job was at a performance of the opera La Bohème in Buenos Aires, Argentina; I was 8 years old. I earned my first degree, in the arts, when I was 16.

If you weren’t working at ICL, where would you like to be working and why?
This is a hard question! I would likely be in some type of scientific role involving HPC in nuclear or radiochemical applications. It is also possible that I would be in a more traditional professor role. Although, if I had the chance, I would probably be involved in programs that increase mentoring for children in underserved communities – particularly those who would be first-generation college students – to connect them to opportunities in STEM.

Recent Papers

  1. Ayala, A., S. Tomov, M. Stoyanov, A. Haidar, and J. Dongarra, “Accelerating Multi-Process Communication for Parallel 3-D FFT,” 2021 Workshop on Exascale MPI (ExaMPI), St. Louis, MO, USA, IEEE, December 2021. DOI: 10.1109/ExaMPI54564.2021.00011
  2. Hori, A., E. Jeannot, G. Bosilca, T. Ogura, B. Gerofi, J. Yin, and Y. Ishikawa, “An International Survey on MPI Users,” Parallel Computing, vol. 108, December 2021. DOI: 10.1016/j.parco.2021.102853

Recent Conferences

  1. NOV – SC21, St. Louis, MO
    Alan Ayala, Anthony Danalis, Aurelien Bouteiller, Daniel Mishler, George Bosilca, Gerald Ragghianti, Hartwig Anzt, Joan Snoderly, Joseph Schuchart, Neil Lindquist, Piotr Luszczek, Qinglei Cao, Thomas Herault, Yu Pei

Upcoming Conferences

  1. JAN – Master of HPC, Trieste, Italy
    Piotr Luszczek
  2. FEB
    Deborah Penchoff
  3. FEB – EPEXA Meeting, Roanoke, Virginia
    George Bosilca, Joseph Schuchart, Thomas Herault
  4. FEB – SIAM Conference on Parallel Processing for Scientific Computing (PP22), Seattle
    Ahmad Abdelfattah, Aurelien Bouteiller, Jack Dongarra, Mark Gates, Natalie Beams, Neil Lindquist, Piotr Luszczek, Stanimire Tomov

Recent Lunch Talks

  1. NOV 5 – Grzegorz Kwasniewski (ETH Zurich): From graph pebbling to I/O optimal and high-performance code
  2. NOV 12 – Mohsen Mahmoudi-Aznaveh (Texas A&M): ParU: Parallel Unsymmetric Multifrontal sparse LU factorization
  3. DEC 3 – Anthony Danalis: SDE library internals, a.k.a. watching the making of sausage
  4. DEC 10 – Daniel Mishler: HHL: a Quantum Algorithm for Exponential Speedup on Systems of Linear Equations

Upcoming Lunch Talks

  1. JAN 7 – Qinglei Cao: Dense, Mixed-Precision and Tile Low-Rank GEMM and Cholesky on Fugaku Using PaRSEC
  2. JAN 14 – Mark Gates: Parallel divide & conquer eigenvector computation in SLATE
  3. JAN 21 – Sameer Deshmukh: O(N) distributed dense solvers
  4. JAN 28 – Giuseppe Congiu: Extending PAPI System Detection Capabilities
  5. FEB 4 – Asim Yarkhan: A Retrospective of Runtimes (@ICL), or How SLATE Gets Going
  6. FEB 11 – Neil Lindquist: Improving the Performance of LU Factorization Through Threshold Pivoting
  7. FEB 18 – Maksim Melnichenko: An Update on Templates for Randomized Numerical Linear Algebra & the Randomized LAPACK Effort
  8. FEB 25 – Laura Grigori (INRIA Paris): Recent advances in randomization techniques for solving linear systems of equations

Open Positions at ICL

UTK Mathworks Professorship in Scientific Computing

The Department of Electrical Engineering and Computer Science (EECS) at the University of Tennessee, Knoxville (UTK) invites candidates to apply for an endowed MathWorks Professorship in Scientific Computing, a tenure-track faculty position at the associate or full professor level. MathWorks has contributed financially to ICL for several years and has now made an additional contribution to endow the position. ICL’s relationship with MathWorks goes back to before the company was founded: MathWorks founder Cleve Moler worked with Jack Dongarra in the 1970s and early 1980s on LINPACK, a Fortran software library for performing numerical linear algebra on computers and supercomputers. ICL hopes to fill the MathWorks Professorship by summer 2022, for a start in the fall 2022 semester, with someone who can carry on this tradition of developing mathematical software.