News and Announcements
TITAN X at ICL

On May 15th, a few enthusiastic ICLers unboxed a new NVIDIA GTX TITAN X GPU (Maxwell GM200). This commodity graphics card, unlike many of the other GPUs and co-processors we currently have in the ICL arsenal, is a consumer grade gaming card and claims the top spot in the GTX/GeForce 900 series of NVIDIA GPUs. We also anxiously await the arrival of more NVIDIA cards, including enterprise/compute versions of the Maxwell architecture.
The MAGMA team is interested in how the TITAN X will compare to the previous generation's Kepler architecture in terms of performance and power efficiency. The comparison will be particularly interesting because the double-precision performance of the new (Maxwell) TITAN X is considerably lower than that of the original (Kepler) TITAN (192 GFLOP/s vs. 1500 GFLOP/s, respectively), a consequence of the much smaller number of dedicated FP64 CUDA cores on its 601 mm² die.
| GTX TITAN X GPU Specs | |
| --- | --- |
| Base Clock | 1000 MHz |
| Boost Clock | 1075 MHz |
| CUDA Cores | 3072 |
| TMUs | 192 |
| ROPs | 96 |
| Transistors | 8 billion |
| **Memory Specs** | |
| Memory Clock | 7.0 Gbps |
| Bandwidth | 336.5 GB/s |
| Memory | 12 GB |
| Memory Type | GDDR5 |
| Bus Width | 384-bit |
| **Processing Power** | |
| Single Precision | 6144 GFLOP/s |
| Double Precision | 192 GFLOP/s |
NSF SI2: TESSE Project Funded
The Task-based Environment for Scientific Simulation at Extreme Scale (TESSE) project is a collaborative effort driven by Edward Valeev (Virginia Tech), Robert Harrison (Stony Brook), and George Bosilca (ICL/UTK). This project will develop a new-generation programming environment that uniquely addresses the needs of emerging computational models in chemistry and other fields, and will allow these models to reap the benefits of tomorrow's computer hardware. The project will bring scientists closer to predicting properties of matter, and will aid in the design of matter transformations, which are vitally important to the technological leadership and energy security of the U.S. Funding is provided by an NSF SI2 award beginning in April 2015 for a duration of 3 years.
A Note on Security
As many of you know, over the course of the last month or so we have had at least one unwanted guest on the 2nd and 3rd floors of the Claxton building. While we do not believe this person poses any immediate danger, ICLers should be vigilant about locking their office doors and making sure that the common areas are secure, especially before departing for the evening. It is also important to be aware of your surroundings, particularly if you are working alone after hours.
If you see something out of the ordinary, report it to your supervisor and/or call the UTPD at 865-974-3111 (4-3111 from a campus phone). UTPD is aware of the intrusions and has a suspect; he is not believed to be dangerous. Below is a surveillance snapshot of our “visitor,” taken at approximately 08:16 EDT on 06/03/15.
Recent Releases
PLASMA 2.7.1 Released
PLASMA 2.7.1 is now available. The PLASMA (Parallel Linear Algebra Software for Multicore Architectures) package is a dense linear algebra package at the forefront of multicore computing, designed to deliver the highest possible performance from a system with multiple sockets of multicore processors. PLASMA achieves this objective by combining state-of-the-art solutions in parallel algorithms, scheduling, and software engineering. Currently, PLASMA offers a collection of routines for solving linear systems of equations, least squares problems, eigenvalue problems, and singular value problems.
The PLASMA 2.7.1 release adds the following updates:
- Fixed an infinite loop in the LU recursive panel kernels.
- Updated the eztrace module for compliance with EZTrace 1.0.6.
- Fixed the F77 interface to handle the Tile descriptor correctly.
- Updated the Lapack_to_Tile/Tile_to_Lapack family of routines to support both in-place and out-of-place layout translation.
More details are available in the Release Notes. Visit the PLASMA software page to download the installer (recommended) or the source code.
HPCC 1.5.0a Released
HPCC 1.5.0a is now available. The HPCC (HPC Challenge) benchmark suite is designed to establish, through rigorous testing and measurement, the bounds of performance on many real-world applications for computational science at extreme scale. To this end, the benchmark includes a suite of tests for sustained floating point operations, memory bandwidth, rate of random memory updates, interconnect latency, and interconnect bandwidth. The main factor that differentiates the components of the suite is their memory access patterns, which together span the space of memory access characteristics defined by temporal and spatial locality. The components are brought together inside HPCC, which allows information to pass between them and provides a comprehensive testing and measurement framework that goes beyond the sum of its parts.
The 1.5.0a release of HPCC contains the following updates:
- Added global error accounting in STREAM.
- Updated error checking to report contributions from multiple MPI processes to the overall error.
- Added barrier to make sure all processes enter STREAM kernel tests at the same time.
- Updated naming conventions to match the original benchmark in STREAM.
- Changed scaling constant to prevent verification from overflowing in STREAM.
- Simplified MPI communicator code in STREAM.
- Substituted large constants for more descriptive compile time arithmetic in STREAM.
- Added the “restrict” keyword to the STREAM vector pointers for faster generated code.
- Updated STREAM code to the official STREAM MPI version 1.7.
- Removed infinite loop due to default compiler optimization in DLAMCH and SLAMCH.
- Added compiler flags to allow compiling with a C++ compiler.
Visit the HPCC software page to download the tarball.
Interview

Kevin Ye
Where are you from, originally?
I lived in Kansas until about 6 years ago when I moved to Johnson City, TN.
How would you sum up your education thus far? Any interest in pursuing an advanced degree (MS or PhD)?
I graduated high school a year ago in 2014 and I’m currently working towards a Bachelor’s Degree in Computer Science. UT has a 5 year Master’s program, and since I’m slightly ahead of schedule with my curriculum, I might consider applying for the program and attempting to complete the degree in four years.
How did you find your way into Computer Science?
It took a while for me to realize I wanted to be in CS. In sixth or seventh grade, a friend asked me to work on a calculator in C with him for a class project. That was my first experience programming. I enjoyed it, but didn't pick it up again until junior year of high school, when I took an AP Computer Science course. The following summer I went to the Governor's School for Emerging Technologies, where I attended a few lectures about data mining and cyber security and got to see Titan as well as the supercomputer at UTC. That's really when I started considering a career in Computer Science. When high school started up again, I dual enrolled in a few Computer Science courses at ETSU and eventually decided this was the path for me. That's how I made the decision to put Computer Science on my application for UT.
Tell us how you found out about the internship at ICL.
I was emailing around for research positions on campus and I said that I was interested in high performance computing and scientific modeling. Eventually, I got in contact with Dr. Plank and he pointed me in the direction of ICL. A few emails and an interview later and here I am.
What are you working on while at ICL?
I’m working on HPCG, and specifically looking for areas in the code where it can be optimized.
What are your interests/hobbies outside of school/work?
In my spare time I enjoy video games such as League of Legends and several indie games on Steam. I play violin, and I'm a member of a (currently inactive) quartet with two of my friends from high school (we're at 75% capacity). Other times I bike and explore campus with friends.
Tell us something about yourself that might surprise people.
I mostly stopped watching television about a year ago. I watch eSports and live streams instead.
ICL alum and collaborator Volodymyr