News and Announcements
Supercomputing Frontiers and Innovations
ICL director Jack Dongarra has teamed up with Vladimir Voevodin of Moscow State University to create a peer-reviewed journal called Supercomputing Frontiers and Innovations (SuperFrI). One of the unique aspects of the journal is that its publications are open access, meaning anyone with an internet connection can view, read, and download each issue. This open access model will facilitate the rapid distribution of high-quality papers, letters, and reviews to drive further progress in the important and rapidly developing field of supercomputing.
The first issue is now available on the SuperFrI website.
- Toward Exascale Resilience: 2014 update, Franck Cappello, Al Geist, William Gropp, Sanjay Kale, Bill Kramer, Marc Snir
- Runtime-Aware Architectures: A First Approach, Mateo Valero, Miquel Moreto, Marc Casas, Eduard Ayguade, Jesus Labarta
- Towards a performance portable, architecture agnostic implementation strategy for weather and climate models, Oliver Fuhrer, Carlos Osuna, Xavier Lapillonne, Tobias Gysi, Ben Cumming, Mauro Bianco, Andrea Arteaga, Thomas Christoph Schulthess
- Communication Complexity of the Fast Multipole Method and its Algebraic Variants, Rio Yokota, George Turkiyyah, David Keyes
- Model-Driven One-Sided Factorizations on Multicore Accelerated Systems, Jack Dongarra, Azzam Haidar, Jakub Kurzak, Piotr Luszczek, Stanimire Tomov, Asim YarKhan
- Exascale Storage Systems — An Analytical Study of Expenses, Julian Martin Kunkel, Michael Kuhn, Thomas Ludwig
Conference Reports
VECPAR 2014
From June 30 to July 3, several members of the ICL team descended upon Eugene, Oregon, for the 11th International Meeting on High Performance Computing for Computational Science (VECPAR 2014). VECPAR provides an opportunity for researchers and practitioners of computational science to gather and discuss techniques and technologies that can contribute to the effective analysis of complex systems and physical phenomena.
The first day of VECPAR consisted of tutorials, including one on Trilinos—the linear algebra framework from Sandia National Laboratories—which Piotr Luszczek and Mark Gates attended, and one on the Intel Xeon Phi co-processor. The second day included a workshop on Autotuning.
On the third day of the conference, Ichitaro Yamazaki presented ICL’s work on “Mixed-Precision Orthogonalization Scheme and Adaptive Step Size for CA-GMRES on GPUs,” which won a Best Paper award! Piotr Luszczek presented “Heterogenous Acceleration for Linear Algebra in Multi-Coprocessor Environments,” and was chair of the session on Direct/Hybrid Methods for Solving Sparse Matrices. Stanimire Tomov presented “Self-Adaptive Multiprecision Preconditioners on Multicore and Manycore Architectures,” and Mark Gates presented the paper “Accelerating Computation of Eigenvectors in the Nonsymmetric Eigenvalue Problem,” during the last session of the conference.
International Workshop on Extreme Scale Scientific Computing

Left to right: Nataliya Berezneva, Torsten Hoefler, Thomas Sterling, Jack Dongarra, Alok Choudhary, David Keyes, and Tracy Rafferty
From June 30 to July 1, ICL’s Jack Dongarra and Tracy Rafferty traveled to Moscow, Russia, for the International Workshop on Extreme Scale Scientific Computing. Jack co-chaired the workshop with Vladimir Voevodin of Moscow State University. The workshop featured 19 talks and 20 invited attendees, including speakers from Europe, the US, and Asia, as well as students and staff from Vladimir’s Research Computing Center.
Recent Releases
MAGMA 1.5.0 beta 3 Released
MAGMA 1.5.0 beta 3 is now available. This release provides performance improvements for SVD and eigenvector routines, and adds sparse routines. More information is given in the MAGMA: A New Generation of Linear Algebra Libraries for GPU and Multicore Architectures presentation as well as the MAGMA Quick Reference Guide. The MAGMA 1.5.0 release adds the following new functionalities:
- SVD using Divide and Conquer (gesdd);
- Nonsymmetric eigenvector computation is multi-threaded (trevc3_mt);
- Sparse functions.
Parameters (trans, uplo, etc.) now use symbolic constants (e.g., MagmaNoTrans, MagmaLower) instead of characters (e.g., ‘N’, ‘L’). Converters are provided to translate these to LAPACK, CUBLAS, and CBLAS constants.
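For example, a GEMM call that previously took character arguments now takes typed constants. The following is a minimal sketch of a call site, assuming the standard MAGMA dgemm interface; matrix setup and GPU allocation are omitted:

```c
#include "magma.h"  /* magma_trans_t, MagmaNoTrans, magma_dgemm, ... */

/* Previously, LAPACK-style character arguments were used:
 *   magma_dgemm( 'N', 'N', m, n, k, alpha, dA, ldda,
 *                dB, lddb, beta, dC, lddc );
 *
 * The equivalent call now uses typed symbolic constants,
 * which the compiler can type-check: */
magma_dgemm( MagmaNoTrans, MagmaNoTrans, m, n, k,
             alpha, dA, ldda, dB, lddb, beta, dC, lddc );
```

When calling into LAPACK, CUBLAS, or CBLAS underneath, the provided converters translate these constants back into the corresponding character or enum values.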
Visit the MAGMA software page to download the tarball.
PAPI 5.3.2 Released
PAPI 5.3.2 is now available. This release features several enhancements and bug fixes.
The release includes a host of component updates:
- NVML component updates;
- appio component memory leaks fixed;
- Haswell-EP support added to the RAPL component;
- event enumeration in the perf_event_uncore component now works;
- all components now have appropriate domains and granularities.
The PAPI team also added support for the Intel Silvermont and Qualcomm Krait processors, and FreeBSD support has been revamped for FreeBSD 10. Some test codes now fall back to PAPI_TOT_INS if PAPI_FP_INS is not defined. Intel Haswell presets have been refined, and x87 instructions were added to the PAPI_FP_INS preset on Intel Sandy Bridge and Ivy Bridge processors.
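The fallback used by the test codes can be sketched as follows; this is a minimal sketch assuming PAPI's standard preset-event API (real test codes do considerably more error handling):

```c
#include <stdio.h>
#include <papi.h>

int main(void) {
    int event = PAPI_FP_INS;
    long long count = 0;

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
        return 1;

    /* If the floating-point instruction preset is not defined
     * on this CPU, fall back to counting total instructions. */
    if (PAPI_query_event(PAPI_FP_INS) != PAPI_OK)
        event = PAPI_TOT_INS;

    if (PAPI_start_counters(&event, 1) == PAPI_OK) {
        /* ... code under measurement ... */
        PAPI_stop_counters(&count, 1);
        printf("instructions counted: %lld\n", count);
    }
    return 0;
}
```

PAPI_query_event lets a program check whether a preset maps to native events on the current hardware before trying to count it, which is what makes this kind of graceful fallback possible.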
Visit the PAPI software page to download the tarball.
Interview

Antoine Petitet
Where are you from, originally?
I was born in Paris, France. Paris is an old and large city, known among other things for its sights. To a kid, it probably looks even larger than it really is. Its public transportation, however, can rapidly take you from one place to another. I greatly enjoyed growing up there, and after a few years abroad, it is where I live today with my wife Isabelle and my daughter Hortense.
Can you summarize your educational background?
In the late eighties, I left Paris for three years for Toulouse, in the southwest of France. There, I earned a computer science degree from the ENSEEIHT engineering school, spending my final year at CERFACS, a European research center, in the parallel algorithms group led by Iain Duff. In January 1993, I started as a graduate research assistant in the Computer Science Department of the University of Tennessee, Knoxville, within Jack’s group. I earned a PhD from UTK in 1996 and stayed on as a postdoctoral researcher at ICL until 2001.
How did you get introduced to ICL?
The first time I heard about ICL was at CERFACS. I was working there on a blocked implementation of the Level 3 BLAS for MIMD vector processors, a.k.a. “shared-memory supercomputing in practice.” The Alliant FX/80 and the Convex C220 were my playground. One day Jack came for a visit, and during the afternoon, he and Iain had a lively discussion in the office next to mine. The first distributed-memory supercomputers, such as the Intel iPSC/2 and iPSC/860 hypercubes, were just coming out. I did not know then that I would have the chance to work on such systems (at ORNL) just a few years later as a UTK/ICL student.
What did you work on during your time at ICL?
During my stay at ICL as a student, I worked on my degree, but I guess this is not the purpose of your question. 😉 I first contributed to the ScaLAPACK project, and then worked on the ATLAS, HPL, and GrADS projects. For various reasons, ScaLAPACK, ATLAS, and HPL will always have a special place in my mind. I am certainly very glad to see them still heavily used by the high-performance computing community today, and I am very grateful to have had the chance to contribute to them.
What are some of your favorite memories from your time at ICL?
I have certainly kept great memories of my time at ICL, all of them related to the great and talented people I had a chance to meet while I was there. I remember the conviviality of the Friday lunches, and the trips we took to the various editions of Supercomputing and other conferences. I enjoyed visiting other states and locations all over the country. Even though these trips were quite demanding to prepare, they were always a source of fun and discovery.
Tell us where you are and what you’re doing now.
I eventually moved back to France in 2001. I worked for Sun Microsystems (now Oracle) for a few years as a benchmark engineer, and then took a job at ESI-Group within the product engineering department. ESI-Group produces manufacturing software tools that help with designing virtual prototypes, i.e., better products, such as safer and lighter cars. The finite element method is often used, so one needs to solve large sparse linear systems and generalized symmetric eigenvalue problems. Distributed-memory computers are commonly used by our industrial customers, so my job also has a large “high-performance computing” aspect. Nowadays, getting industrial applications, like the ones ESI-Group produces, to efficiently use GPUs or Intel Xeon Phi co-processors is a significant part of my daily job.
In what ways did working at ICL prepare you for what you do now, if at all?
Working at ICL prepared me very well for what I am doing now. In fact, I do use the software I worked on when I was at ICL on a daily basis. Knowing parts of those packages inside and out is a great advantage when building something on top of them. When working for the software industry, having released some freely available software in source form is certainly a highly valuable and useful experience.
Tell us something about yourself that might surprise people.
A couple of months before officially joining UTK/ICL, I came to Knoxville for a “private” visit. It was important for me to see with my own eyes where I had imagined spending the next few years of my life. I remember meeting Jack and the “linear algebra” group in Ayres Hall, and Clint Whaley took some time to show me around and fill me in on some useful practical details of student life at UT. By the end of the visit, I was convinced that I would enjoy living there, and the years that followed certainly confirmed this impression.