News and Announcements
Terry Moore
After 28 years of service and dedication to the Innovative Computing Laboratory (ICL), Terry Moore retired on February 26, 2021. Throughout his time at ICL, Terry was known for his hard work and professionalism, capacious vocabulary, quick wit, and unbounded pursuit of knowledge.
As a central figure and pillar of ICL, Terry left his mark on countless projects—where he helmed the proposal process for many years—as well as on every member of ICL who was fortunate enough to work with him.
In lieu of a more traditional in-person going away ceremony, Jack invited Terry to speak about his experiences at ICL as part of a Friday talk. You can watch Terry’s presentation in the video above. Jack captured all the friendly faces in the Zoom call—pictured below.
Jack also invited ICLers, alumni, collaborators, and friends to upload personal messages for a farewell video, which is provided above.
Congratulations on your retirement, Terry! Here’s to your next adventure—and don’t be a stranger.
Employment Opportunities at ICL
ICL is seeking full-time Research Scientists (MS or PhD) to participate in the design, development, and maintenance of numerical software libraries for solving linear algebra problems on large distributed-memory machines with multi-core processors and hardware accelerators, as well as performance monitoring capabilities for new and advanced hardware and software technologies.
The prospective researcher will coauthor papers to document research findings, present the team’s work at conferences and workshops, and help lead students and other team members in their research endeavors in ongoing and future projects. Given the nature of the work, there will be opportunities for publication, travel, and high-profile professional networking and collaboration across academia, labs, and industry.
An MS or PhD in computer science, computational sciences, or math is preferred. Background in at least one of the following areas is also preferred: numerical linear algebra, HPC, performance monitoring, machine learning, or data analytics.
For more information check out ICL’s jobs page: http://www.icl.utk.edu/jobs.
NVIDIA A100 DGX

ICL just acquired a DGX A100 compute node as part of an award from NVIDIA. The new system pairs two 64-core AMD Rome CPUs (dual socket) and 2 TB of system memory with eight state-of-the-art NVIDIA A100 (Ampere) GPUs totaling 640 GB of GPU memory.
The new A100 GPUs include a set of specialized processing cores (Tensor Cores) that can use NVIDIA’s TensorFloat-32 (TF32) arithmetic for significant computational speedups in applications like machine learning and AI, with minimal loss of accuracy relative to traditional FP32 floating point arithmetic.
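For readers curious what TF32 changes numerically: it keeps FP32’s sign bit and 8-bit exponent (so the dynamic range is unchanged) but stores only 10 mantissa bits instead of 23. The following is our own minimal sketch of that reduced precision, not NVIDIA code; it truncates the mantissa rather than rounding to nearest, which the hardware actually does:

```python
import numpy as np

def to_tf32(x):
    """Simulate TF32 precision by truncating a float32 mantissa to 10 bits.

    Real Tensor Cores round to nearest; truncation keeps the sketch simple.
    The sign bit, 8 exponent bits, and top 10 mantissa bits are kept, and
    the low 13 mantissa bits are zeroed.
    """
    x = np.asarray(x, dtype=np.float32)
    bits = x.view(np.int32)
    mask = np.int32(~((1 << 13) - 1))  # zero out the low 13 mantissa bits
    return (bits & mask).view(np.float32)

x = np.float32(1.0 + 2**-11)  # exactly representable in FP32
print(to_tf32(x))             # the 2**-11 bit falls below TF32 precision -> 1.0
print(to_tf32(np.float32(1.5)))  # 1.5 uses the top mantissa bit and survives
```

This also shows why TF32 works well for ML workloads: the exponent range (and thus resistance to overflow/underflow) matches FP32, and only the least significant mantissa bits are sacrificed.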
The new DGX A100 will join ICL’s machine rack alongside an NVIDIA DGX-1 compute node, which was awarded in 2019. These awards stem from a longstanding relationship with NVIDIA, including ICL’s designation as an NVIDIA CUDA Center of Excellence.
Conference Reports
SC20
As with many workshops and conferences this past year, the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC20) moved to an all-digital platform for 2020. To accommodate the new medium, the conference dates were extended an additional week, and SC20 ran from November 9 to November 19.
And even though many of the talks at SC20 are behind the conference’s paywall, ICL hosted Jack’s famous HPC “booth talk,” which is embedded above for your viewing pleasure.
@Supercomputing is everywhere we are! #morethanhpc #sc20 pic.twitter.com/raGM6Zcd3m
— Paula Olaya (@paulaolaya22) November 15, 2020
In spite of the change in format, five computational science research centers from the University of Tennessee—the Bredesen Center, the Global Computing Laboratory, the Innovative Computing Laboratory, the Joint Institute for Computational Sciences, and Chattanooga’s SimCenter—represented the university by anchoring the University of Tennessee’s virtual booth. UTK’s local contingent painted the “Rock” to commemorate the occasion.
As usual, ICL had a significant presence at SC, with faculty, research staff, and students giving talks, hosting tutorials, presenting papers, and leading “Birds of a Feather” sessions through SC20’s web conferencing platform. In lieu of the traditional “ICL alum dinner,” Jack hosted members of ICL and GCLab in a virtual cocktail hour.
ICL once again ran a dedicated ICL@SC webpage, where interested parties could keep tabs on ICL-related events—including a list of attendees, detailed schedule of talks, and the latest project handouts.
Plans for SC21 are already underway. Watch this space for details on ICL’s participation.
Recent Releases
MAGMA 2.5.4 Released
MAGMA 2.5.4 is now available. Matrix Algebra on GPU and Multicore Architectures (MAGMA) is a collection of next-generation linear algebra (LA) libraries for heterogeneous architectures. The MAGMA package supports interfaces for current LA packages and standards (e.g., LAPACK and BLAS) to allow computational scientists to easily port any LA-reliant software components to heterogeneous architectures.
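To illustrate the LAPACK-style call pattern that MAGMA preserves, here is a standard CPU-side LU factor-and-solve using SciPy’s LAPACK wrappers. This is a generic sketch, not MAGMA code; MAGMA’s GPU-resident routines (e.g., magma_dgetrf) follow the same getrf/getrs shape, which is what makes porting LA-reliant software straightforward:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# LAPACK-style LU solve on the CPU:
#   lu_factor wraps dgetrf (LU factorization with partial pivoting),
#   lu_solve wraps dgetrs (triangular solves using the factors).
rng = np.random.default_rng(42)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

lu, piv = lu_factor(A)      # factorization step (dgetrf)
x = lu_solve((lu, piv), b)  # solve step (dgetrs)

assert np.allclose(A @ x, b)  # the computed x solves the original system
```

Code written against this interface can move to a heterogeneous system by swapping in the corresponding MAGMA routines, with the factorization and solves running on the GPU.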
Changes for MAGMA 2.5.4 include:
- Support for CUDA 11;
- Support for Ampere GPUs;
- New routine: added trmm in all precisions (needed for hipMAGMA);
- New routine: added sidi routine in real precisions to compute inertia for symmetric indefinite matrices;
- New routine: GPU interfaces to hetrf in all precisions;
- New routine: magmablas_Xdiinertia to compute the inertia of a diagonal of a matrix in GPU memory;
- Bug fixes in herk and sytrd;
- Bug fixes in ranged eigensolver testers and fallback calls for small matrices; and
- Performance improvements for symmetric/Hermitian eigensolvers.
Click here to download the tarball.
2021 ICL Annual Report
For 20 years, ICL has produced an annual report to provide a concise profile of our research, including information about the people and external organizations who make it all possible.
Please download a copy and check it out. Printed copies are available in Claxton 203.
You can also view all of our past reports here.
Interview

Wissam Sid-Lakhdar
Where are you from, originally?
Algeria, in North Africa.
Can you summarize your educational background?
I did my studies in France, starting as an undergrad in mathematics in Marseille, followed by a Master’s degree in computer science and applied mathematics at ENSEEIHT in Toulouse. I then finished with a PhD in computer science at ENS Lyon with Dr. Jean-Yves L’Excellent. My field of research there was sparse direct methods and asynchronous execution on shared- and distributed-memory systems, in the context of the MUltifrontal Massively Parallel sparse direct Solver (MUMPS).
Where did you work before joining ICL?
After my studies, I did a first postdoc at Texas A&M University in College Station, Texas, with Prof. Tim Davis, where I worked on batched QR factorizations for GPUs.
Then, I did a second postdoc at Lawrence Berkeley National Laboratory in California with Dr. Sherry Li, where I developed the GPTune autotuner targeting exascale applications.
How did you first hear about the lab, and what made you want to work here?
I had collaborated with several people in ICL’s linear algebra team in the past, especially in the context of the SparseKaffe NSF project that spanned UTK, A&M, and the University of Florida. My field of expertise aligns perfectly with the expertise in this team, so I felt that this was a natural fit.
What is your focus here at ICL? What are you working on?
Currently, I am continuing my effort on the GPTune software for autotuning some of ICL’s software packages—starting with PLASMA and moving to SLATE and others later on—all in the context of the NSF-sponsored Basic ALgebra LIbraries for Sustainable Technology with Interdisciplinary Collaboration (BALLISTIC) project.
In parallel, I am developing a version of the QR factorization that has the potential to fix the performance issues of QR factorizations with pivoting while keeping some of their numerical stability properties.
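For context, “QR with pivoting” here refers to column-pivoted QR (LAPACK’s geqp3), whose greedy pivot selection is valuable for rank detection but serializes the algorithm and hurts performance at scale. A quick SciPy illustration of the factorization being discussed (illustrative only; this is the classical algorithm, not the new variant):

```python
import numpy as np
from scipy.linalg import qr

# Column-pivoted QR (LAPACK geqp3): A[:, P] = Q @ R, where the pivoting
# orders columns so that |diag(R)| is non-increasing -- useful for
# rank-revealing, but the pivot search limits parallel performance.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
Q, R, P = qr(A, mode="economic", pivoting=True)

assert np.allclose(A[:, P], Q @ R)   # the permuted factorization holds
d = np.abs(np.diag(R))
assert np.all(d[:-1] >= d[1:] - 1e-12)  # diagonal magnitudes non-increasing
```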
What are your interests/hobbies outside of work?
I enjoy traveling as often as I can and going on new hikes when possible. I am thus eager to discover the trails in the Smoky Mountains.
Tell us something about yourself that might surprise people.
I learned how to drive stick shift on the fly, out of necessity, through a 3-hour drive, in the dark of night, in the middle of nowhere, with a friend (who couldn’t drive stick shift either), as that car was the only one available. It was either that or spend the night there.
If you weren’t working at ICL, where would you like to be working and why?
Research is what I am looking for, so an academic setting is what I want. Given that I enjoyed Texas A&M a lot, both because of the nice work environment and the small-city vibe, I would probably work there as a research scientist.