News and Announcements

ICL 25th Anniversary


Click here to view/download the full album. Thanks to Mathieu and Tomo for their photo contributions.


On March 31 – April 2, the Innovative Computing Laboratory marked its 25th anniversary with the “25 Years of Innovative Computing” workshop. The workshop, held at the University of Tennessee Conference Center, included 45 talks by ICL alumni across two days. The roughly 90 participants came from all over the globe, including attendees from as far away as Japan and Saudi Arabia, to share their current interests and activities and to celebrate their time at ICL.

Catching up with old friends and colleagues was easy to do with plenty of time to reconnect and remember. A reception on March 31st kicked off the event, and a banquet in Neyland Stadium’s East Skybox followed on April 1st. The evening of April 2nd brought things to a close at the Dongarra residence where ICLers were literally bused in for good food and fellowship—and libations!

An inherent connection of interests and of origin exists between every member of ICL, past and present. Above all else, the 25th anniversary showed just how important these connections are, and just how important the people of ICL are to each other. Here’s to another great year at the Innovative Computing Laboratory.

SC15 Deadlines

It’s that time of year again, and the Supercomputing ’15 due dates for tutorials, panels, and technical papers are right around the corner. Plan your work accordingly!

A full list of due dates can be found here.

Conference Reports

SIAM Conference on Computational Science & Engineering

A strong contingent of ICLers descended upon the SIAM conference on Computational Science and Engineering (SIAM CSE) in Salt Lake City, Utah on March 14 – 18. The SIAM CSE conference seeks to enable in-depth technical discussions on a wide variety of major computational efforts on large-scale problems in science and engineering, foster the interdisciplinary culture required to meet these large-scale challenges, and promote the training of the next generation of computational scientists.

ICL was strongly represented with a number of posters and talks. Azzam Haidar presented “Efficient Eigensolver Algorithm on Accelerator-Based Architecture,” while Piotr Luszczek presented “Algorithmic Selection, Autotuning, and Scheduling for Accelerator-Based Codes for Numerical Linear Algebra.” Piotr also co-organized a three-part minisymposium, “Innovative Algorithms for Eigenvalue and Singular Value Decomposition,” along with Azzam Haidar and Stanimire Tomov.

Hartwig Anzt presented two posters, “Radical Optimization Techniques for Asynchronous Iterative Algorithms on GPUs,” and “Experiences in Autotuning Linear Algebra Operations for Energy Minimization on GPUs.” He would have likely presented more, given the opportunity. To round things off, Ichitaro Yamazaki presented, “Performance of Computing Low-Rank Approximation on Hybrid CPU/GPU Architectures,” and George Bosilca gave two talks, “PaRSEC: Distributed Task-based Runtime for Scalable Hybrid Applications,” and “Building Blocks for Resilient Applications.”

Supercomputing Frontiers 2015

ICL’s Jack Dongarra gave a keynote talk at the inaugural Supercomputing Frontiers conference, held in Singapore on March 17–20. Supercomputing Frontiers, organized by the A*STAR Computational Resource Centre (A*CRC), aims to gather prominent global supercomputing practitioners and leaders to share and explore outstanding problems and achievements in HPC, Big Data, and extreme scale computing.

Jack’s keynote was titled “Current Trends in Parallel Numerical Computing and Challenges for the Future.” VR World caught up with him after the talk and asked his opinion on the future of extreme scale computing:

VR World: During your keynote you mentioned the ‘exascale challenge’. In your opinion, how do we get there from here? What has to happen?

Jack Dongarra: We can’t use today’s technology to build that exascale machine. It would cost too much money, and the power requirements would be way too much. It would take 30 Tianhe-2 clusters in order to get there. We have to have some way to reduce the power and keep the cost under control.

Today, all of our machines are over-provisioned for floating-point. They have an excess floating-point capability. The real issues are related to data movement. It’s related to bandwidth. For example, you have a chip. And this chip has increasing computing capability — you put more cores on it. Those cores need data, and the data has to come in from the sides. You’ve got area that’s increasing due to the computing capability but the perimeter is not increasing to compensate for it. The number of pins limits the data that can go in. That’s the crisis we have.
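The area-versus-perimeter point is a simple scaling argument, and a toy calculation makes it concrete (illustrative numbers only, not real chip data): if compute capability grows with die area (proportional to s²) while pin count grows with the perimeter (proportional to s), then the off-chip bandwidth available per unit of compute shrinks as the chip scales up.

```python
# Toy scaling model of the pin-bandwidth crisis described above.
# Compute capability is taken to scale with die area (s * s), while
# off-chip bandwidth is taken to scale with the perimeter (4 * s).
for s in [1, 2, 4, 8]:
    area = s * s          # proxy for cores / floating-point capability
    perimeter = 4 * s     # proxy for pins / off-chip bandwidth
    print(f"s={s}: compute={area}, bandwidth={perimeter}, "
          f"bandwidth-per-compute={perimeter / area:.2f}")
```

Doubling the die edge quadruples the compute proxy but only doubles the bandwidth proxy, which is exactly the imbalance described here: more floating-point capability than the pins can feed.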

That has to change. One way it changes is by doing stacking. 3D stacking is a technology that we have at our disposal now. That will allow much more information flow in a way that makes a lot more sense in terms of increasing bandwidth. We have a mechanism for doing that, so we get increased bandwidth. That bandwidth is going to help reduce [power draw] as we don’t have to move data into the chip.

The other thing that’s going to happen is that photonics is going to take over. The data is going to move not over copper lines but over optical paths. The optical paths reduce the amount of power necessary. So that’s a way to enhance the data movement, and to reduce the power consumption of these processors. The chip gets much more affordable, and we can have a chance at turning that computing capability into realized performance — which is a key thing.

In the US, I think we’ll reach exascale in 2022. It’s a question of money: we could build a machine today, but it would be too expensive. The current thinking is that the technology will be realizable around 2020, and the US will be able to deploy the machine in 2022. The money won’t be in place until then, but the technology will be ready ahead of time.

Click here to read the full interview.

Interview


Damien Genet

Where are you from, originally?

I’m another French guy. I was born in Rouen, a small city in Normandy, in the northwest of France.

Can you summarize your educational background?

I earned my bachelor’s degree in Normandy, did three years of “classe préparatoire” to join ENSEIRB (the national school of electronics, computer science, and radiocommunication), an engineering school in Bordeaux, and graduated with a concentration in computer science (high performance computing specialty). After that, I was employed by the CNRS (the French national center for scientific research) as a research engineer, working at INRIA Bordeaux Sud-Ouest during this period. In September 2010, I had an opportunity to begin a PhD in the Bacchus team at INRIA Bordeaux Sud-Ouest, and I defended my thesis in December 2014.

Tell us how you first learned about ICL.

I don’t remember how I first learned about ICL, but I know Mathieu Faverge and Emmanuel Agullo, who each did a postdoc at ICL a few years ago. I also worked with George Bosilca when he spent a year in Bordeaux, and part of the DisCo team visited INRIA over the last few years.

What made you want to work for ICL?

Jack Dongarra and ICL are famous names in our domain, so trying to do a good postdoc after my not-so-good PhD seemed like a good idea. When I visited ICL in 2014, I enjoyed my stay in Knoxville, so everything fell into place.

What are you working on while at ICL?

My first project will be porting PaRSEC to Argobots. Argobots is a lightweight runtime platform developed by Argonne National Laboratory, the University of Illinois Urbana-Champaign, the University of Tennessee Knoxville, and Pacific Northwest National Laboratory.

If you weren’t working at ICL, where would you like to be working and why?

If I were not working here, I would probably be in Switzerland working at EPFL where I have contacts.

What are your interests/hobbies outside work?

I enjoy reading books and comics. I also enjoy movies and video games.

Tell us something about yourself that might surprise people.

I don’t really miss France. And I’m not very talkative, but you should have realized that by the time you read this.

Recent Papers

  1. Donfack, S., J. Dongarra, M. Faverge, M. Gates, J. Kurzak, P. Luszczek, and I. Yamazaki, “A Survey of Recent Developments in Parallel Implementations of Gaussian Elimination,” Concurrency and Computation: Practice and Experience, vol. 27, issue 5, pp. 1292-1309, April 2015.
  2. Anzt, H., S. Tomov, and J. Dongarra, “Accelerating the LOBPCG method on GPUs using a blocked Sparse Matrix Vector Product,” Spring Simulation Multi-Conference 2015 (SpringSim'15), Alexandria, VA, SCS, April 2015.
  3. Kabir, K., A. Haidar, S. Tomov, and J. Dongarra, “Performance Analysis and Design of a Hessenberg Reduction using Stabilized Blocked Elementary Transformations for New Architectures,” The Spring Simulation Multi-Conference 2015 (SpringSim'15), Best Paper Award, Alexandria, VA, April 2015.
  4. Herault, T., A. Bouteiller, G. Bosilca, M. Gamell, K. Teranishi, M. Parashar, and J. Dongarra, “Practical Scalable Consensus for Pseudo-Synchronous Distributed Systems: Formal Proof,” Innovative Computing Laboratory Technical Report, no. ICL-UT-15-01, April 2015.

Recent Conferences

  1. MAR – MPI Forum, Portland, OR: Aurelien Bouteiller
  2. MAR – SUPER all hands meeting, San Diego, CA: Anthony Danalis
  3. MAR – SIAM CSE, Salt Lake City, UT: Azzam Haidar, George Bosilca, Hartwig Anzt, Ichitaro Yamazaki, Piotr Luszczek
  4. MAR – GTC15, San Jose, CA: Stanimire Tomov
  5. MAR – Jack Dongarra

Upcoming Conferences

  1. APR – Azzam Haidar, Hartwig Anzt, Khairul Kabir

Recent Lunch Talks

  1. MAR 6 – Azzam Haidar: “Performance Bounds in Symmetric Eigenvector Calculations”
  2. MAR 13 – Audris Mockus (EECS): “Evidence Engineering”
  3. MAR 20 – Anthony Danalis: “Using PaRSEC to Develop Non-static Applications”
  4. MAR 27 – Yves Robert (INRIA): “Voltage Overscaling Algorithms for Energy-Efficient Workflow Computations With Timing Errors”

Upcoming Lunch Talks

  1. APR 10 – Tingxing Dong: “Batched One-sided Factorizations on Hardware Accelerators Based on GPUs”
  2. APR 17 – Ahmad Ahmad: “GPU Accelerated Memory-bound Linear Algebra Kernels”
  3. APR 24 – Manish Parashar (Rutgers): “Big Data Challenges in Simulation-based Science”