Conference Reports
Sparse Days
From June 28th through July 2nd, 2015, ICL’s Jack Dongarra served as general chair for the Sparse Days Workshop in Saint Girons, Ariège, France. The third such workshop in three decades, Sparse Days III included sessions on low-rank approximations for high performance sparse solvers, innovative clustering methods for large graphs and block methods, parallel methods for time-dependent problems, and advances in optimization with applications to data assimilation, among other topics.
The first two Sparse Days workshops were also held in Saint Girons, in 1994 and 2003, respectively. This most recent workshop featured 41 individual talks and drew over 100 attendees—including several ICL alumni and frequent collaborators: Julien Langou, Mathieu Faverge, Jim Demmel, Julien Herrmann, Alfredo Buttari, Florent Lopez, Theo Mary, and Cleve Moler, just to name a few. Stay tuned for Sparse Days IV, expected around 2024.
TESSE Kickoff Meeting
On July 1, 2015, ICL hosted a kickoff meeting for the newly NSF-funded TESSE project. TESSE (Task-based Environment for Scientific Simulation at Extreme Scale) is a collaborative effort between multiple institutions and is led by Edward Valeev (Virginia Tech), Robert Harrison (Stony Brook), and George Bosilca (ICL/UTK). TESSE aims to develop a new-generation programming environment to uniquely address the needs of emerging computational models in chemistry—and other fields—and allow these models to reap the benefits of tomorrow’s computer hardware.
As part of the kickoff meeting, Edward Valeev and Torsten Hoefler both gave presentations at ICL in a rare, but welcome, Wednesday Lunch Talk seminar. We look forward to hearing more from the TESSE group as this collaboration continues.
MPI Forum
From June 1st through June 4th, 2015, ICL’s Aurelien Bouteiller participated in the MPI Forum meeting in Chicago, Illinois. MPI 3.1 was voted on and ratified by the Forum and is now part of the official standard. Some familiar faces were alongside Aurelien, including Wesley Bland and Bill Gropp. The official MPI 3.1 document is available here.
Recent Releases
NVIDIA Optimizes HPCG
NVIDIA recently released a version of the HPCG benchmark that they optimized for their GPUs. HPCG (High Performance Conjugate Gradients) is a benchmark designed to measure performance representative of modern scientific applications by exercising the computational and communication patterns that are commonly found in real science and engineering codes—which are often based on sparse iterative solvers. Intended as a candidate for a new HPC metric, HPCG implements the preconditioned conjugate gradient algorithm with a local symmetric Gauss-Seidel as the preconditioner.
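For readers unfamiliar with that kernel, here is a minimal, illustrative sketch in C++ of the computation HPCG times: a preconditioned conjugate gradient loop built from a sparse matrix-vector product, dot products, vector updates, and one symmetric Gauss-Seidel sweep as the preconditioner. This is not code from the HPCG reference implementation or from NVIDIA’s release; the CSR structure, the tiny 1-D Laplacian test matrix, and all names below are simplifications chosen for illustration only.

```cpp
// Sketch of HPCG's kernel: preconditioned CG with a symmetric Gauss-Seidel preconditioner.
// All data structures and the test problem are illustrative assumptions, not HPCG code.
#include <cmath>
#include <cstdio>
#include <vector>

struct CSR {                        // compressed sparse row storage
    int n;
    std::vector<int> rowptr, col;
    std::vector<double> val;
};

static std::vector<double> spmv(const CSR& A, const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);            // y = A * x
    for (int i = 0; i < A.n; ++i)
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k)
            y[i] += A.val[k] * x[A.col[k]];
    return y;
}

static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// One symmetric Gauss-Seidel sweep (forward then backward) with zero initial guess,
// i.e. z = M^{-1} r with M = (D+L) D^{-1} (D+U) for A = L + D + U.
static std::vector<double> symgs(const CSR& A, const std::vector<double>& r) {
    std::vector<double> z(A.n, 0.0);
    auto relax_row = [&](int i) {
        double sum = r[i], diag = 1.0;
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k) {
            if (A.col[k] == i) diag = A.val[k];
            else sum -= A.val[k] * z[A.col[k]];
        }
        z[i] = sum / diag;
    };
    for (int i = 0; i < A.n; ++i) relax_row(i);        // forward sweep
    for (int i = A.n - 1; i >= 0; --i) relax_row(i);   // backward sweep
    return z;
}

// Preconditioned CG: the loop of SpMV, SymGS, dot products, and vector updates
// whose memory-bound performance HPCG is designed to measure.
static std::vector<double> pcg(const CSR& A, const std::vector<double>& b,
                               int max_iters, double tol) {
    std::vector<double> x(A.n, 0.0), r = b;            // x0 = 0, so r0 = b
    std::vector<double> z = symgs(A, r), p = z;
    double rz = dot(r, z), normb = std::sqrt(dot(b, b));
    for (int it = 0; it < max_iters && std::sqrt(dot(r, r)) > tol * normb; ++it) {
        std::vector<double> Ap = spmv(A, p);
        double alpha = rz / dot(p, Ap);
        for (int i = 0; i < A.n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        z = symgs(A, r);
        double rz_new = dot(r, z), beta = rz_new / rz;
        for (int i = 0; i < A.n; ++i) p[i] = z[i] + beta * p[i];
        rz = rz_new;
    }
    return x;
}

int main() {
    // Tiny 1-D Laplacian (tridiagonal SPD matrix) standing in for HPCG's 3-D stencil problem.
    int n = 8;
    CSR A{n, {0}, {}, {}};
    for (int i = 0; i < n; ++i) {
        if (i > 0)     { A.col.push_back(i - 1); A.val.push_back(-1.0); }
        A.col.push_back(i); A.val.push_back(2.0);
        if (i < n - 1) { A.col.push_back(i + 1); A.val.push_back(-1.0); }
        A.rowptr.push_back((int)A.col.size());
    }
    std::vector<double> b(n, 1.0);
    std::vector<double> x = pcg(A, b, 100, 1e-10);
    std::printf("x[0] = %g, x[n/2] = %g\n", x[0], x[n / 2]);
    return 0;
}
```

The full benchmark applies these same operations to a 27-point stencil problem on a 3-D grid distributed across MPI ranks, which is what makes its memory-access and communication patterns representative of real sparse iterative applications.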
It makes sense that NVIDIA—a major player in both enterprise and consumer computing markets—would want HPCG to run efficiently on their GPU hardware, and NVIDIA’s optimized version showcases the relevance of the HPCG benchmark in today’s heterogeneous high performance computing environments. A new batch of HPCG benchmark scores is expected to be released at ISC’15 in July, and NVIDIA will likely submit scores from their optimized code. Stay tuned.
Visit the HPCG software page to download the newly optimized release.
PaRSEC / DPLASMA 2.0.0-rc1
PaRSEC 2.0.0-rc1 is now available. PaRSEC (Parallel Runtime Scheduling and Execution Controller) is a generic framework for architecture-aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures.
Compared with previous releases, this release candidate for PaRSEC’s next major version includes numerous new features, improved performance, and enhanced support for accelerators. Moreover, PaRSEC 2.0.0 comes bundled with a new DPLASMA library that supports more PBLAS functionality with better performance.
Visit the PaRSEC software page to download the tarball.
Interview

Sangamesh Ragate
Where are you from, originally?
I am from Bangalore, a city in southern India.
Can you summarize your educational background?
I earned my Bachelor’s degree in Electronics and Communications Engineering from VTU, India in 2010. I then worked at a private company in Bangalore as a design engineer, developing applications for FPGAs and embedded processors, until 2014. Last August, I came to UTK to pursue my Master’s in Computer Engineering.
Tell us how you first learned about ICL.
I was interested in pursuing an advanced degree with a concentration in High Performance Computing. I was looking for universities that had labs and people working in this domain and ICL naturally kept popping up, so I decided to investigate further.
What made you want to work for ICL?
As I mentioned earlier, I wanted to work in HPC—specifically in Computer Architecture and low-level/kernel programming. The GRA position I found at ICL when I applied to UTK was exactly what I was looking for, and hence I decided to work at ICL.
What are you working on while at ICL?
I work with the Performance Analysis Group as a developer for the PAPI tool.
If you weren’t working at ICL, where would you like to be working and why?
I would have applied to the Texas Advanced Computing Center at UT-Austin because I believe they also do a lot of work in high performance computing.
What are your interests/hobbies outside work?
I love playing sports. I was a cricket player and played for my school and corporate teams. I recently started playing tennis and racquetball at UTK, since there is no cricket here. I also like listening to music and watching documentaries/movies.
Tell us something about yourself that might surprise people.
If I am having trouble falling asleep at night, I open the Intel software manual and start reading it. I fall asleep within 10 minutes. It works for me. 🙂 Too much raw information!