News and Announcements
25 Years of Innovative Computing

As many of you know, the 2014-2015 academic year marks ICL’s 25th year of Innovative Computing. To commemorate this milestone, ICL is hosting a 25th anniversary workshop on April 1 – 2, with a welcome reception on March 31 and a banquet on April 1. The workshop will feature presentations from our former group members, and the reception and banquet are open to family and spouses. Jack and Sue Dongarra have also been gracious enough to host a dinner at their house on April 2nd at 6:30pm.
Although we do not have an agenda at this time, we have a rough schedule to assist in planning.
Draft schedule:
| Date | Event | Location | Time |
|---|---|---|---|
| March 31 | Welcome Reception | Downtown Hilton | 6:00pm |
| April 1 | 25 Years of Innovative Computing | UT Conference Center, Henley Street | 9:00am–5:00pm |
| April 1 | 25th Anniversary Banquet | UT’s Neyland Stadium Skybox | 6:00pm |
| April 2 | 25 Years of Innovative Computing | UT Conference Center, Henley Street | 9:00am–5:00pm |
| April 2 | Dinner at Jack and Sue’s | Dongarra Residence, Oak Ridge | 6:30pm–9:00pm |
Since we have added an event, we would like to get a head count for it, and to give you an opportunity to update your attendance for the other events as well.
Please go to the event site and confirm your attendance, including your guest(s), for each event. All orders and reservations will be based on this information. Please respond by Monday, March 9!
Conference Reports
PPoPP 2015
On February 7, 2015, ICL’s Piotr Luszczek and Hartwig Anzt took part in the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP). PPoPP is a forum for leading papers on the principles and foundations of parallel programming, tools and techniques for parallel programming, and experiences in using parallel programming to solve application problems. PPoPP was co-located with numerous other workshops and drew over 500 attendees.
Piotr presented a poster, “Towards Batched Linear Solvers on Accelerated Hardware Platforms,” in the main track, and a paper, “Optimization for Performance and Energy for Batched Matrix Computations on GPUs,” at the Eighth Workshop on General Purpose Processing Using GPUs (GPGPU 8). Hartwig presented a paper, “Energy Efficiency and Performance Frontiers for Sparse Computations on GPU Supercomputers,” at the 6th International Workshop on Programming Models and Applications for Multicores and Manycores (PMAM), which was held in conjunction with PPoPP.
Recent Releases
PLASMA 2.7.0 Released
PLASMA 2.7.0 is now available. PLASMA (Parallel Linear Algebra Software for Multicore Architectures) is a dense linear algebra package at the forefront of multicore computing, designed to deliver the highest possible performance from a system with multiple sockets of multicore processors. PLASMA achieves this objective by combining state-of-the-art solutions in parallel algorithms, scheduling, and software engineering. Currently, PLASMA offers a collection of routines for solving linear systems of equations, least squares problems, eigenvalue problems, and singular value problems.
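For orientation, the four problem classes named above map onto familiar dense linear algebra operations. A minimal NumPy sketch of each follows; this illustrates the mathematics only, not PLASMA’s own C interface:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
S = A.T @ A + np.eye(4)        # symmetric positive definite 4x4 matrix

x = np.linalg.solve(S, np.ones(4))                  # linear system of equations
c, *_ = np.linalg.lstsq(A, np.ones(6), rcond=None)  # least squares problem
w, V = np.linalg.eigh(S)                            # symmetric eigenvalue problem
U, s, Vt = np.linalg.svd(A, full_matrices=False)    # singular value problem
```

PLASMA provides tiled, multicore implementations of these same operation families across its supported precisions.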
This new release offers the following updates:
- Parallel tridiagonal divide-and-conquer solver for eigenvalue problems. A parallel implementation of the BLAS library is no longer required for this routine, and the PLASMA team recommends using a sequential BLAS library for simplicity.
- Many minor bug fixes in response to reports on the user forum. Unfortunately, some problems with the Fortran interface have not yet been fixed.
Note from the developers: We strongly recommend using the latest version of the PLASMA installer, which will automatically download this package and, if required, install all other dependencies.
Visit the PLASMA software page to download the installer.
MAGMA 1.6.1 Released
MAGMA 1.6.1 is now available. MAGMA (Matrix Algebra on GPU and Multicore Architectures) is a collection of next generation linear algebra (LA) libraries for heterogeneous architectures. The MAGMA package supports interfaces for current LA packages and standards, e.g., LAPACK and BLAS, to allow computational scientists to easily port any LA-reliant software components to heterogeneous architectures. MAGMA allows applications to fully exploit the power of current heterogeneous systems of multi/many-core CPUs and multi-GPUs/co-processors to deliver the fastest possible time to accurate solution within given energy constraints.
The MAGMA 1.6.1 release adds the following new functionalities:
- Building both shared and static libraries is now the default; comment out FPIC in make.inc to build only the static library
- Added max norm and one norm to [zcsd]lange
- Extended {sy|he}mv and {sy|he}mv_mgpu implementation to upper triangular
- Fixed memory access bug in {sy|he}mv_mgpu, used in {sy|he}trd_mgpu
- Fixed errant argument check in laswp, affecting getrf_mgpu
- Fixed tau in [cz]gelqf, which needed to be conjugated
- Fixed workspace size in symmetric/Hermitian eigenvalue solvers
- Made the fast magmablas_zhemv the default in symmetric/Hermitian eigenvalue solvers (previously required defining the -DFAST_HEMV option)
- Added FGMRES for non-constant preconditioner operators
- Added backward communication interfaces for SpMV and preconditioner passing the vectors on the GPU
- Added function to generate cuSPARSE ILU level-scheduling information for a given matrix
- Added the batched QR routine
- Improved the performance of all batched routines
- Fixed NaN output from batched factorizations
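Several of the items above concern MAGMA’s batched routines, which factor or solve many small, independent matrices in a single call so the GPU stays saturated where one-matrix-at-a-time launches would not. The concept can be sketched with NumPy’s stacked-array support; this is an illustration of the batched idea, not MAGMA’s actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n = 1000, 8
# A stack of 1000 independent, well-conditioned 8x8 systems
A = rng.standard_normal((batch, n, n)) + n * np.eye(n)
b = rng.standard_normal((batch, n, 1))

# One call factorizes and solves every system in the batch
x = np.linalg.solve(A, b)
```

MAGMA’s batched kernels exploit the same structure on the GPU, amortizing launch overhead and scheduling many small factorizations concurrently.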
Support for the new Tesla K80 “GK210-Duo” is provided through MAGMA’s multi-GPU routines (see the MAGMA LU Benchmark on up to four K80s).
Visit the MAGMA software page to download the tarball.
MAGMA MIC 1.3.1 Released
MAGMA MIC 1.3.1 is now available. This release provides implementations of MAGMA’s one-sided (LU, QR, and Cholesky) and two-sided (Hessenberg, bidiagonal, and tridiagonal reduction) dense matrix factorizations, as well as linear system and eigenproblem solvers, for Intel Xeon Phi coprocessors. More information on the approach is given in this presentation.
The MAGMA MIC 1.3.1 release adds the following:
- Added orthogonal transformation routines: {zun|cun|dor|sor}mbr, {zun|cun|dor|sor}mlq, {zun|cun|dor|sor}mql
- Added SVD routine using the divide-and-conquer algorithm: {z|c|d|s}gesdd
- Performance optimizations for the two-sided factorizations (reductions to bidiagonal, tridiagonal, and upper Hessenberg forms)
- Added zscal and hemv for the CPU, plus copy functions
- Added LDLt factorization without pivoting
- Added a hybrid solver for symmetric indefinite problems using the Bunch-Kaufman diagonal pivoting method: {zhe|che|dsy|ssy}sv
Visit the MAGMA software page to download the tarball.
Interview

Yaohung Tsai
Where are you from, originally?
I’m from Taiwan, a beautiful island in East Asia. I lived in Hsinchu, a.k.a. the windy city, which is about 50 miles from Taipei, the capital.
Can you summarize your educational background?
I earned both my bachelor’s and master’s degrees in mathematics from National Taiwan University. However, most of our professors emphasized pure math. That’s why I considered continuing my studies abroad. I hit a few obstacles that prevented me from joining ICL last fall. Right now I’m a grad student in math (again!) at UT and trying to transfer to the PhD program in CS.
Tell us how you first learned about ICL.
I was using GPUs to do some linear algebra related work. That’s when I found MAGMA and learned about ICL.
What made you want to work for ICL?
ICL is a leader in linear algebra on GPUs, which I’m really interested in. My master’s thesis was related to the QR factorization in MAGMA. I also met Jack and Jakub in Taiwan last year, and that really persuaded me to come here for the chance to work with them.
What are you working on while at ICL?
I’m working on implementing recursive factorizations for LAPACK. I’m also investigating possible tuning opportunities for convolution kernels in deep neural networks with Jakub, Piotr, and Blake.
If you weren’t working at ICL, where would you like to be working and why?
I probably would have pursued a software engineering job in Taiwan, though I would still have weighed going abroad against staying.
What are your interests/hobbies outside work?
I love playing volleyball. Sometimes I go hiking or cycling, and I play board games and computer games.
Tell us something about yourself that might surprise people.
I had never seen snow in Taiwan. The first time I saw snow was during a trip my girlfriend and I took to Europe last year. We went crazy when we saw a small patch of snow on a mountain. So the weather we’ve had in Knoxville lately is a whole new thing for me, but I’m kind of enjoying it so far.