News and Announcements
DOE Report Calls for Renewed Focus in High-End Mathematics
A recently released DOE report, Applied Mathematics Research for Exascale Computing—co-chaired by ICL’s Jack Dongarra—stresses the importance of prioritizing research into high-end mathematics to help keep the United States on the cutting edge of computing.
Exascale computing (capable of one quintillion floating point operations per second) will enable us to solve problems in ways that are not feasible today and will result in significant scientific breakthroughs. However, the transition to exascale poses numerous scientific and technological challenges.
According to the report, increased funding for the development of new mathematical models and methods of gathering data is key to overcoming many of those challenges, and a commitment to keeping the United States a leader in the field should be the first priority.
The idea is that advances in mathematics will lead to advances in high-performance computing applications, and that having more researchers in applied mathematics, high-performance computing, and the application sciences work cooperatively can help make these advances, and exascale computing, a reality.
Computer scientists, applied mathematicians, and application scientists will all need to work closely together; it will prove vital to create an environment in which we can fully exploit the computational resources that will be available at the exascale.
Yves Robert Receives IEEE TCSC Award for Excellence
ICL alum and visiting scientist Yves Robert will receive the IEEE TCSC Award for Excellence at this year’s IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGrid) where he will also give a keynote.
This honor is awarded through the IEEE Technical Committee on Scalable Computing (TCSC) for significant and sustained contributions to the scalable computing community, coupled with an outstanding record of high-quality, high-impact research. The award consists of a medal and a $1,000 honorarium. Congratulations, Yves!
NICS HPC Seminar Series
The National Institute for Computational Sciences invites you to a Seminar Series on High Performance Computing, every Tuesday and Thursday from 2:10pm to 3:10pm in the NICS conference room, Claxton 351. The series is a joint effort among several leadership organizations (NICS, JICS, OLCF, and ICL) to increase HPC awareness within the academic community.
Topics will be introduced starting with the most basic and building up to more advanced subjects in HPC. No registration is required for the seminar.
Calendar of topics to be covered in April:
| Date | Title |
|---|---|
| 1 | Using Scientific Libraries |
| 3 | Doing Linear Algebra in Parallel |
| 8 | Parallel I/O Part 1 (Strategies) |
| 10 | Parallel I/O Part 2 (I/O Libraries) |
| 15 | Understanding the MPI framework on XC30 Cray systems |
| 17 | Programming with OpenACC |
| 22 | An overview of Fortran 2003 and 2008 Standards |
| 24 | Art and Science of using Python in HPC |
| 29 | Visualization on HPC |
SC14 Due Dates
Time flies when you’re having fun, and the Supercomputing ’14 due dates for tutorials, panels, and technical papers are right around the corner. Plan your work accordingly!
- April 4, 2014: Submissions due for Technical Paper Abstracts
- April 7, 2014: Submissions due for Tutorials
- April 11, 2014 (extended to April 18, 2014): Full submissions due for Technical Papers
- April 25, 2014: Submissions due for Panels
- July 31, 2014: BOFs, Posters, ACM Student Research Competition, Doctoral Showcase, Emerging Technologies, and most other Technical Program deadlines
A full list of due dates can be found on the SC14 website.
Conference Reports
Big Data and Extreme Scale Computing
The second Big Data and Extreme Scale Computing (BDEC) workshop was held February 26–28 in Fukuoka, Japan, at the Centennial Hall of the Kyushu University School of Medicine. This workshop, the second in a series sponsored by the NSF, is premised on the idea that we must begin to systematically map out and account for the ways in which the major issues associated with Big Data intersect with, impinge upon, and potentially change the national (and international) plans now being laid for achieving exascale computing.
The ICL team helped organize the meeting, which was attended by application leaders confronting diverse big data challenges, alongside members of industry, academia, and government with expertise in algorithms, computer system architecture, operating systems, workflow middleware, compilers, libraries, languages, and applications. Among the attendees was Yoshio Kawaguchi of MEXT, who presented the road map of Japan’s Office for Promotion of Computing Science for an exascale machine.
Overall, the workshop was a great success with 40+ individual talks and panel discussions and over 90 attendees from all over the world. The third and final workshop is being planned and will likely be hosted in Europe.
Interview

Richard Barrett
Where are you from, originally?
West Virginia, 10th generation.
Can you summarize your educational background?
After four years in the Marines, calling in artillery strikes on unsuspecting rocks in the Mojave Desert (thankfully during peacetime), I landed in the oasis of Indiana University. A graduate math instructor encouraged me to take linear algebra, which I did, and which changed my view of the world. (You mean I can measure the distance between points in multi-dimensional space? Cool!) An introductory course in programming (using Pascal), a course in Fortran (using punch cards), and a course in Linear Programming again re-oriented my view of the world. After an unrelated job post-undergrad, I enrolled as a graduate student in math at UTK, intending to become a professor at Marshall University in my hometown.
How did you get introduced to ICL?
Jack gave a lunch pizza talk to the math students: solving the symmetric eigenvalue problem using divide-and-conquer. “If this off-diagonal entry is close enough to zero, we just make it zero, and divide the domain.” The pragmatism of this approach was exactly what I needed.
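The divide step described here can be sketched numerically. The toy example below (assuming NumPy) only illustrates the splitting idea: treating a tiny off-diagonal entry of a symmetric tridiagonal matrix as zero decouples the eigenproblem into two independent blocks, perturbing the eigenvalues by at most the size of the discarded entry. The full divide-and-conquer algorithm also glues the halves back together with a rank-one correction, which is omitted here.

```python
import numpy as np

def split_eigenvalues(T, k):
    """Treat the tiny coupling T[k, k+1] as zero and solve the two
    resulting independent symmetric eigenproblems."""
    lo = np.linalg.eigvalsh(T[:k + 1, :k + 1])
    hi = np.linalg.eigvalsh(T[k + 1:, k + 1:])
    return np.sort(np.concatenate([lo, hi]))

# Symmetric tridiagonal matrix with a nearly negligible coupling at k = 2.
n, k, eps = 6, 2, 1e-10
d = np.arange(1.0, n + 1)                 # main diagonal
e = np.full(n - 1, 0.5)                   # off-diagonal couplings
e[k] = eps                                # the "close enough to zero" entry
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

exact = np.sort(np.linalg.eigvalsh(T))    # eigenvalues of the full matrix
approx = split_eigenvalues(T, k)          # eigenvalues from the two halves
print(np.max(np.abs(exact - approx)))     # error is on the order of eps
```

By Weyl's inequality, the sorted eigenvalues of the split matrix differ from the originals by at most the norm of the discarded coupling, which is what makes the pragmatic "just make it zero" step safe.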
This motivated me to take Jack’s parallel programming course and I spent a semester at ORNL working for John Drake. In Jack’s class I helped Steve Moulton with the math, and he got me through the CS. I was also aided and abetted by Susan Blackford and Majed Sidani, math students turned CS types. One day I poked my head into Jack’s office at ORNL: “I’m thinking of switching to CS.” “Ok.” “I want to work for you.” “Ok.” “How about support?” “Ok.” And that was that. Two years later, at age 35, I departed with Master’s degrees in math and CS and took a job at Los Alamos. The position claimed to require a PhD, but my preparation at the ICL apparently qualified.
What did you work on during your time at ICL?
Victor Eijkhout had just joined the group as a postdoc, so Jack paired me with him to study iterative solvers, which was the genesis of the “Templates” book. The book was written on three continents: Tony Chan of UCLA was visiting Hong Kong, Henk van der Vorst worked from his university in the Netherlands, and the rest of us were in the US. We just circulated the tarball of LaTeX source and it worked out. This was my first concrete lesson in the power of collaboration. Satomi and Hidehiko Hasegawa visited soon after and translated the book into Japanese—an instant Barrett family treasure. The web had just been turned loose a few months before, but a latex2html translator was already available, so our book went online as well.
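In the spirit of the Templates book, here is a minimal conjugate gradient template (a sketch assuming NumPy, not the book's exact pseudocode) for a symmetric positive definite system:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient for symmetric positive definite A:
    iterate until the residual norm falls below tol * ||b||."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x

# A small SPD test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)     # symmetric positive definite
b = rng.standard_normal(30)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))  # small residual
```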
My thesis was originally suggested by Mary Wheeler, then of Rice University: since iterative solvers for non-symmetric linear systems are not guaranteed to converge, why not apply multiple methods to the same system, amortizing the communication costs where possible? Algorithmic bombardment. I learned about newly emerging non-symmetric solvers, computer arithmetic, inter-process communication, a variety of computer architectures, and other things, which prepared me for Los Alamos.
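The bombardment idea can be sketched as follows. This toy version (assuming SciPy's iterative solvers) simply tries several nonsymmetric methods in turn and accepts the first that converges, whereas the actual scheme interleaves the methods so they can share matrix-vector products and amortize communication:

```python
import numpy as np
from scipy.sparse.linalg import bicgstab, cgs, gmres

def bombard(A, b):
    """Try several nonsymmetric Krylov solvers on the same system and
    return the first converged answer (info == 0 means convergence)."""
    for name, solver in [("gmres", gmres), ("bicgstab", bicgstab), ("cgs", cgs)]:
        x, info = solver(A, b)
        if info == 0:
            return name, x
    raise RuntimeError("no solver converged")

# A small nonsymmetric but diagonally dominant test system.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true

name, x = bombard(A, b)
print(name, np.linalg.norm(A @ x - b))
```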
What are some of your favorite memories from your time at ICL?
Clint Whaley, Susan, Majed, Steve, Victor, and I shared an office in Ayres Hall. The day-to-day interactions with people driven to understand things, the laughter, tears, occasional arguments, and the hard work of daily life made it a special time. Amazing visitors would just appear: I eavesdropped while Roger Hockney sat with Steve for a couple of days; there was a magic hour at the chalkboard with Henk and his amazing insights into Krylov subspaces; Gene Golub told us we’d better really understand the power of orthogonal polynomials. Aye-aye, sir!
One day we received an email from Jack, telling us he wanted us to think up a name for our group, and he suggested ICL. We came up with a few hundred alternatives. So we became the ICL.
Jack thought I was always at work. I arrived at 7:20am, Jack at 7:30. Jack departed at 5:30, I departed at 5:31. Most others arrived around the crack of noon, went to lunch/dinner at 6-ish, and worked until 4:00am. So I was “always” there… afahk. Of course one way or the other we all worked and studied almost all of the time, just like you do there today.
Tell us where you are and what you’re doing now.
I lead the Application Performance Analysis, Modeling, and Tuning team in the Center for Computing Research at Sandia National Laboratories in New Mexico, and with Mike Heroux I co-lead the Mantevo project, which won an R&D 100 award last year. Our goal is to understand the characteristics and capabilities of new computing architectures within the context of Sandia mission application programs. So algorithms; programming models, mechanisms, and languages; runtime systems; microarchitectures; node interconnects; and so on are all considered.
In what ways did working at ICL prepare you for what you do now, if at all?
General: Collaborate. Teamwork. Ask questions. Poke at it. Ask more questions. Poke at it some more. Show people your work early and often. Embrace the grind. Embrace change.
Specific: My first code in Jack’s class multiplied two matrices using Strassen’s method. I nearly fell out of my chair when it achieved 40 MFLOPS on an Alliant FX-8. Later, with a fellow staff member from LANL and a now life-long friend from Sandia, we camped out for three days in the machine room of the first TFLOPS computer in order to run the first production scientific problem at that scale. Later, I led the effort to run LINPACK on the first general purpose PFLOPS machine (Jaguar@ORNL), this mostly from the comfort of home, with a cast of many scattered around the lab and town. Now we are in the middle of the push to EFLOPS. Regardless of the scale, my approach is always informed by the context of production codes. I want my stuff used to move scientific understanding forward. And the parallel processing fundamentals learned on the PVM’ed student lab workstations in the basement of Ayres are essentially the same today.
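The Strassen multiplication he mentions can be sketched briefly. The version below (assuming NumPy, and restricted to square matrices whose size is a power of two) performs seven recursive multiplications per level instead of the usual eight:

```python
import numpy as np

def strassen(A, B, cutoff=2):
    """Multiply square matrices of power-of-two size with Strassen's
    seven-product recursion; fall back to ordinary multiplication
    below the cutoff."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four quadrants of the product.
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
print(np.allclose(strassen(A, B), A @ B))  # True
```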
Tell us something about yourself that might surprise some people.
Our family of 5 lived in a small, unfinished garage for two years in the middle of rural New Mexico while we designed and built our house of dirt (adobe). Outhouse, primitive bathing, critters of all sorts, yet, of course we had wireless internet—hey, we’re not barbarians. The first night our new home was ready, my wife made us go camping…