News and Announcements

Jack is SC14 Technical Program Chair

ICL’s Jack Dongarra was named the Technical Program Chair for the 2014 Supercomputing Conference (SC14). As Technical Program Chair, Jack will be responsible for recruiting and reviewing contributions from the research community, including papers, posters, workshops, and panels, among other program elements.

The SC Technical Program is highly competitive and one of the broadest of any HPC conference. SC14 will launch new initiatives focused on big data and analytics as well as innovative new technologies in HPC, and every aspect of the SC14 Technical Program will be rigorously peer reviewed.

NICS HPC Seminar Series

The National Institute for Computational Sciences invites you to a Seminar Series on High Performance Computing, every Tuesday and Thursday from 2:10pm to 3:10pm in the NICS conference room in Claxton 351. This is a joint effort between different leadership organizations (NICS, JICS, OLCF, ICL) to increase HPC awareness within the academic community.

Different topics will be introduced starting with the most basic and building up to more advanced topics in HPC. No registration is required for the seminar.

Calendar of topics to be covered in February:

Date     Title
Feb 4    More on MPI
Feb 6    Intro to Archiving Systems: HPSS
Feb 11   Using Parallel R
Feb 13   Intro to OpenMP
Feb 18   More on OpenMP
Feb 20   GPU Programming
Feb 25   Parallel Linear Algebra
Feb 27   Two Case Studies of CUDA Programming and Optimization

Recent Releases

2014 ICL Annual Report

For thirteen years, ICL has produced an annual report to provide a concise profile of our research, including information about the people and external organizations who make it all happen. Please download a copy and check it out.

MAGMA MIC 1.1 Released

MAGMA MIC 1.1 is now available. This release provides implementations for MAGMA’s one-sided (LU, QR, and Cholesky) and two-sided (Hessenberg, bi- and tridiagonal reductions) dense matrix factorizations, as well as linear and eigenproblem solvers for Intel Xeon Phi Coprocessors. More information on the approach is given in this presentation.

The MAGMA MIC 1.1 release adds the following new functionalities:

  • LU, QR, and Cholesky factorizations and solvers with CPU interfaces;
  • Performance improvements for the two-sided reductions to Hessenberg, bidiagonal, and tridiagonal forms;
  • Eigensolvers for symmetric and non-symmetric eigenproblems;
  • SVD routine;
  • Added orthogonal transformation routines;
  • General matrix inversion (routine {z|c|d|s}getri);
  • Performance improvements for the one-sided factorizations using single and multiple MICs.

Visit the MAGMA software page to download the tarball.

clMAGMA 1.1 Released

clMAGMA 1.1 is now available. clMAGMA is an OpenCL port of the MAGMA library. This release adds the following new functionalities:

  • MultiGPU implementations for the LU, QR, and Cholesky factorizations;
  • LU, QR, and Cholesky factorizations and solvers with CPU interfaces;
  • Multi-buffer LU, QR, and Cholesky factorizations that overcome size limitations for single memory allocation, enabling the solution of large problems;
  • Performance improvements.
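
The “size limitations for single memory allocation” that the multi-buffer factorizations overcome come from per-buffer allocation caps on OpenCL devices. A back-of-the-envelope sketch of the problem (the 512 MiB buffer cap and the 16384×16384 matrix below are assumed values chosen for illustration, not clMAGMA internals or any particular device’s limit):

```python
import math

# Illustrative arithmetic only: why multi-buffer factorizations matter.
# The buffer cap and matrix size are assumed example values.
bytes_per_double = 8
max_alloc = 512 * 1024**2            # assumed per-buffer allocation cap: 512 MiB

n = 16384                            # matrix dimension (example)
matrix_bytes = n * n * bytes_per_double

# A single buffer cannot hold the whole matrix under this cap...
single_buffer_ok = matrix_bytes <= max_alloc

# ...but splitting the matrix across several buffers keeps each one legal.
num_buffers = math.ceil(matrix_bytes / max_alloc)
panel_bytes = matrix_bytes / num_buffers

print(matrix_bytes // 1024**2, "MiB total; fits in one buffer:", single_buffer_ok)
print(num_buffers, "buffers of about", round(panel_bytes / 1024**2), "MiB each")
```

Splitting the factorization across multiple buffers in this way is what lets clMAGMA solve problems larger than any single allocation the device will grant.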

Visit the MAGMA software page to download the tarball.

MAGMA 1.4.1 Released

MAGMA 1.4.1 is now available. This release provides performance improvements and support for the new NVIDIA Kepler GPUs. More information is given in the MAGMA: A New Generation of Linear Algebra Libraries for GPU and Multicore Architectures presentation as well as the MAGMA Quick Reference Guide.

The MAGMA 1.4.1 release adds the following new functionalities:

  • Improved performance of geev when computing eigenvectors using blocked trevc;
  • Added new CMake installation for compiling on Windows.

Visit the MAGMA software page to download the tarball.

PAPI 5.3.0 Released

PAPI 5.3.0 is now available and includes several enhancements for Intel MIC (Xeon Phi) architectures, including support for offload code in addition to the previously released support for native code. See INSTALL.TXT for details. In addition to offload support, we’ve enhanced support for host-side power reading from MIC and added a utility to aid in plotting the results.

Intel finally admitted that Ivy Bridge supports Floating Point measurement at least as well as Sandy Bridge and added Floating Point events to the official event table. PAPI 5.3 supports them too. See the PAPI topic: Counting Floating Point on Sandy Bridge and Ivy Bridge for details.

The linux-rapl component had a problem with dynamic range: the length of time you could measure was a function of the (random) starting value. This component has been rewritten to ensure access to the full 32 bits of dynamic range, and a test, rapl_wraparound, has been provided to estimate how long you can measure a naive gemm. One cautionary note: you no longer get an error message on overflow, so you need to check your timings and results for reasonableness. See the PAPI topic: Accessing RAPL for more details.
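
Since overflow no longer produces an error message, it helps to know roughly how long a counter can run before wrapping. A minimal sketch of the arithmetic, assuming a typical RAPL energy unit of 1/2^16 J and a hypothetical 100 W sustained package draw (both values are assumptions for illustration; the real unit comes from the MSR_RAPL_POWER_UNIT register on your machine):

```python
# Rough estimate of how long a 32-bit RAPL energy counter can run before
# wrapping. The energy unit (1/2**16 J) is a typical value; the package
# power is an assumed workload, not a measurement.
energy_unit_joules = 1.0 / 2**16      # ~15.3 microjoules per count (typical)
package_power_watts = 100.0           # assumed sustained draw during a gemm

counter_range = 2**32                 # full 32-bit dynamic range
wraparound_seconds = counter_range * energy_unit_joules / package_power_watts

print(f"~{wraparound_seconds:.0f} s (~{wraparound_seconds / 60:.1f} min) before wraparound")
```

Under these assumptions a measurement window of about ten minutes is safe; heavier power draw shortens it proportionally.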

We made some major changes in the way we handle ctests. First, many of the ctests were built based on outdated configure switches in the makefile(s). We rewrote the tests to determine at runtime whether or not they can run. This may result in more tests executing on your systems than in the past. Enjoy! Next, recognizing the value of our test suite as example code, we restructured the way make install-all works. This option now creates location independent makefiles that will allow you to clone your own copy of the tests directory and modify these tests for your own purposes.

There have been several other bug fixes and enhancements:

  • The Intel Haswell event table now supports PAPI_L1_ICM;
  • AMD Bulldozer now supports Core select masks;
  • The CUDA component now properly reports the number of native events;
  • The command_line utility no longer skips the last event on a list;
  • icc builds no longer add an extraneous -openmp flag.

Visit the PAPI software page to download the tarball.

PLASMA 2.6.0

The PLASMA team has released PLASMA 2.6.0 which integrates several new features as well as updated kernel documentation.

This package contains the following updates:

  • libcoreblas has been made fully independent. All dependencies on libplasma and libquark have been removed. A pkg-config file has been added to ease compilation of projects using the standalone coreblas library.
  • New routines PLASMA_[sdcz]pltmg[_Tile[_Async]], for PLASMA Test Matrices Generation, have been added to create special test matrices from the Matlab gallery. This includes Cauchy, Circulant, Fiedler, Foster, Hadamard, Hankel, Householder, and many other matrices.
  • Added norms computation for triangular matrices: PLASMA_[sdcz]lantr[_Tile[_Async]] and dependent kernels.
  • Doxygen documentation for the coreblas kernels has been updated.
  • Fixed a problem, reported by J. Dobson of NAG, with thread-setting modifications made in the singular value and eigenvalue routines when MKL is used.
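
As one concrete example of the gallery matrix types listed above, a Cauchy matrix is defined entrywise as A[i][j] = 1/(x[i] - y[j]) for node vectors x and y. The sketch below is pure Python for illustration only; it does not use the PLASMA_[sdcz]pltmg API, and the node vectors are arbitrary choices:

```python
# Mathematical definition of a Cauchy matrix, one of the Matlab-gallery
# test matrix types. Not the PLASMA API; for illustration only.

def cauchy(x, y):
    """Return the Cauchy matrix A[i][j] = 1/(x[i] - y[j]) as a list of lists."""
    return [[1.0 / (xi - yj) for yj in y] for xi in x]

n = 4
x = [i + 1.0 for i in range(n)]        # nodes 1, 2, 3, 4 (arbitrary)
y = [-(j + 1.0) for j in range(n)]     # nodes -1, -2, -3, -4 (arbitrary)
A = cauchy(x, y)

for row in A:
    print(["%.3f" % v for v in row])
```

Matrices like this are useful for testing because their conditioning is well understood, which is what makes the gallery collection valuable for exercising the solvers.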

Visit the PLASMA software page to download the tarball. A complete list of changes is available here.

Interview

Tom Cortese

Where are you from, originally?

I was born in Evanston, Illinois, just north of Chicago. My parents had grown up in the city and always wanted the American dream house in the suburbs, so most of my childhood was spent moving to a different grade school and making new friends every couple of years while my parents searched for that perfect house with the dog in the yard and white picket fence.

Can you summarize your educational background?

High school students seemed to self-divide into “math/science” and “English/history” types; I originally tended toward the former, but I am finding history increasingly fascinating as I get older, and I have always loved languages and playing with words. I have always been an avid reader as well, but hated having to write papers in English class. The remainder of my education has been a more-or-less constant pull between the low-risk, moderate-payoff path of engineering and computer science and the high-risk, high-payoff possibility of becoming a famous musician. I studied at the University of Illinois in Urbana, where I wrote and used a parallel spectral-methods turbulent thermal convection code on the CM-5 to investigate the underlying physical mechanisms responsible for the generation of wind and tornadoes. That was followed by post-doc stints at the Minnesota Supercomputer Institute (where I rewrote my code using MPI to run on different machines and also modified it to model convection within the earth’s mantle); at UIUC’s Air Conditioning and Refrigeration Center (where I finished another student’s project, studying the effect of the length and attack angle of an array of fins on drag and heat transfer, after he left for a “real job”); and at the Heinemann Medical Research Lab at the Carolinas Medical Center in Charlotte (a multi-disciplinary team studying replacement heart valves and determining when arterial walls were likely to burst and cause aneurysms, making corrective surgery worth the risk, and when it was safer to just leave them alone). What if I had spent all that time practicing instead of studying?

How did you get introduced to ICL?

My first “real job” was with Kuck and Associates, Inc., which was acquired by Intel just a few months later. I was sharing an office with Clay Breshears, who had just left the DoD Programming Environment and Training (PET) program. After a year or so at Intel, the “rough transition period” still hadn’t really settled down, so when Clay told me of an opening in the PET program, I was interested. When it was time for my interview, it happened that Shirley Moore was visiting someone (her father, I think) at Carle Clinic in Urbana, just a few blocks away from my house, so I dressed up in a suit and we discussed the job in the lobby of the hospital.

What did you work on during your time at ICL?

The DoD formed the PET program to act as a liaison between domain scientists (people who need to use high-performance computing for their research but aren’t necessarily computer experts) and “onsites” (people who are familiar with a variety of computer platforms and help the scientists use them effectively). Shirley was the leader of the group at ICL; Dave Cronk joined at some point, and I rounded out the team. Even though I was technically part of ICL, I was located at the Stennis Space Center in Mississippi until Hurricane Katrina dumped eight feet of muddy water into the house I was renting, at which point the powers that be were nice enough to let me move to Knoxville.

What are some of your favorite memories from your time at ICL?

The retreats were always a nice time, especially since for me it wasn’t just a short drive to Townsend, but an excuse to fly to another state for a couple of days. Also, everybody at ICL was always nice, willing to assist with difficult projects, etc. – I never felt even a hint of the scheming and behind-the-scenes competition that seems to be prevalent elsewhere. Also, Jack’s annual SC dinners are awesome and a nice personal touch; no other place that I have ever worked has done anything similar.

Tell us where you are and what you’re doing now.

I am at the National Center for Supercomputing Applications in Urbana, IL, and I am one of a handful of people working with various science teams, helping them to make the best use of their allocations on the Blue Waters machine. Assistance that we provide ranges all the way from tracking down why they haven’t received their login tokens to spending several weeks helping them re-write their code to improve performance.

In what ways did working at ICL prepare you for what you do now, if at all?

Actually, what I do at NCSA is similar to what I was doing at ICL, except that the PET program had many different types of computer platforms, whereas at NCSA I am working mainly with the Cray/AMD/NVIDIA architecture.

Tell us something about yourself that might surprise some people.

I have been both a “bible-thumping” fundamentalist and a “card-carrying” atheist at various points in my life.

Recent Papers

  1. Nelson, J., “Analyzing PAPI Performance on Virtual Machines,” VMware Technical Journal, Winter 2013, January 2014.
  2. Marin, G., “Performance Analysis of the MPAS-Ocean Code using HPCToolkit and MIAMI,” ICL Technical Report ICL-UT-14-01, University of Tennessee, February 2014.

Recent Lunch Talks

  1. JAN 10: Julien Herrmann, Designing LU-QR hybrid solvers for performance and stability (PDF)
  2. JAN 17: Kirk Cameron (Virginia Tech), Power and Energy Whack-a-mole in HPC
  3. JAN 24: Blake Haugen, Latent Semantic Analysis (PDF)
  4. JAN 31: Thomas Herault, Assessing the Impact of ABFT and Checkpointing Composite Strategies (PDF)
  5. FEB 7: Aurelien Bouteiller, Fault Tolerant MPI (PDF)
  6. FEB 14: Yves Robert, Scheduling Data Sensor Retrieval for Boolean Tree Query Processing
  7. FEB 21: Simplice Donfack, Improving multicore capabilities in hybrid CPUs/GPUs applications (Case of MAGMA) (PDF)
  8. FEB 28: Samuel Thibault (INRIA), StarPU: Task Graphs from Heterogeneous Platforms to Clusters Thereof (PDF)

Upcoming Lunch Talks

  1. MAR 7: Mathieu Faverge (INRIA), Taking advantage of hybrid systems for sparse direct solvers via task-based runtimes (PDF)
  2. MAR 13: Atsushi Hori (RIKEN), A New Process/Thread Model for Many-core Era
  3. MAR 21: Mark Gates, Accelerating eigenvector computation (PDF)
  4. MAR 28: Hartwig Anzt, Optimizing Krylov Subspace Solvers on Graphics Processing Units (PDF)
  5. APR 4: Jakub Kurzak, Some Techniques for Optimizing CUDA More (PDF)
  6. APR 10: Dorian Arnold (UNM), A Simulation-based Framework for Evaluating Resilience Strategies at Scale
  7. APR 25: George Bosilca, Toward composite fault management strategies: a quantitative evaluation (PDF)

Visitors

  1. Mathieu Faverge
    Mathieu Faverge from the University of Bordeaux will be visiting from February 10 through March 8. Mathieu will be back in the lab for a month working with the Linear Algebra group.
  2. Yves Robert
    Yves Robert from ENS Lyon will be visiting from February 10 through February 17. Yves will be back in the lab for a short visit and will be working with the Linear Algebra group.
  3. Tim Mattson
    Tim Mattson from Intel will be visiting on Friday, February 14. Tim will be visiting with Jack and giving a lunch talk.
  4. Emmanuel Agullo
    Emmanuel Agullo from INRIA Bordeaux will be visiting from February 24 through February 28. Emmanuel will be in the lab for a week working with the Linear Algebra group.
  5. Samuel Thibault
    Samuel Thibault from INRIA Bordeaux will be visiting from February 24 through February 28. Samuel will be working with the Linear Algebra group.

People

  1. Dan Terpstra

    ICL's Dan Terpstra is retiring in March to pursue other projects, including working with Living Waters of the World to install water purification systems in developing countries.

    Dan has been with ICL for 13 years and has led the Performance Analysis team for the last several years. Good luck to you, Dan!

  2. Chunyan Tang
    Chunyan Tang recently joined ICL as a Graduate Research Assistant and will be working with the Distributed Computing Group. Welcome aboard, Chunyan!
  3. Stephen Richmond
    Stephen Richmond joined ICL on February 3rd as a Graduate Research Assistant and will be working with the Distributed Computing Group. Welcome aboard, Stephen!
  4. George Bosilca
    George Bosilca will make his long-anticipated return to ICL on or around March 1, 2014.


Congratulations

Mr. and Mrs. Ralph

On December 7, 2013, ICL’s James Ralph married Shannon Ralph (née Spickard) in a ceremony in Chattanooga, TN. Congratulations to the bride and groom!

Dates to Remember

ICL Winter Reception

The 2014 ICL Winter Reception has been set for Friday, February 7th, from 5:30-8:30pm, at the Bridgeview Grill on Neyland Drive.