Project Profile
MATEDOR
MAtrix, TEnsor, and Deep-learning Optimized Routines
The MAtrix, TEnsor, and Deep-learning Optimized Routines (MATEDOR) project is performing the research required to define a standard interface for batched operations and to provide a performance-portable software library that demonstrates batched routines for a significant number of kernels. This research is critical because solving many small matrix problems in batches often yields more than a 10× speedup over classical, one-problem-at-a-time approaches.
Working closely with affected application communities, along with ICL’s Batched BLAS initiative, MATEDOR defines modular, optimizable, and language-agnostic interfaces that can work seamlessly with a compiler. This modularity gives application, compiler, and runtime system developers the option of making a single call to a routine from the new batch operation standard, while allowing the entire linear algebra community to collectively attack a wide range of small matrix and tensor problems.
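To make the idea of a single batched call concrete, the sketch below shows one way such an interface could look: a single routine that applies C_i = alpha·A_i·B_i + beta·C_i to every matrix in a batch. The function name, the pointer-array argument style, and the fixed square, column-major layout are illustrative assumptions for this sketch, not the actual Batched BLAS or MATEDOR API.

```c
#include <stddef.h>

/* Hypothetical sketch of a batched GEMM call: one invocation performs
 * C_i = alpha * A_i * B_i + beta * C_i for every i in the batch.
 * Names and layout (square n-by-n, column-major) are assumptions made
 * for illustration; a real library would dispatch this loop to tuned,
 * parallel kernels rather than a plain triple loop. */
void dgemm_batched_sketch(size_t batch, size_t n,
                          double alpha, const double *const *A,
                          const double *const *B,
                          double beta, double *const *C)
{
    for (size_t b = 0; b < batch; ++b) {      /* one small GEMM per batch entry */
        for (size_t j = 0; j < n; ++j) {
            for (size_t i = 0; i < n; ++i) {
                double acc = 0.0;
                for (size_t k = 0; k < n; ++k)
                    acc += A[b][i + k * n] * B[b][k + j * n];
                C[b][i + j * n] = alpha * acc + beta * C[b][i + j * n];
            }
        }
    }
}
```

The point of the single-call design is that the library, not the application, decides how to schedule the many small problems across cores or GPU threads, which is where the batching speedups come from.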
Sponsored by
- National Science Foundation