12th JLESC workshop (online)



Objectives 

The workshop gathers leading researchers in high-performance computing from the JLESC partners INRIA, the University of Illinois, Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre, RIKEN R-CCS and The University of Tennessee to explore the most recent and critical issues in advancing the field of HPC from petascale to the extreme scale era.

The workshop will feature sessions on these eight central topics:

  • Applications and mini-apps
  • Parallel programming models and runtimes
  • Performance tools
  • Resilience
  • Big Data, I/O and in-situ visualization
  • Numerical methods and algorithms
  • Advanced architectures
  • Artificial Intelligence

In addition to these tracks, dedicated sessions targeting more specialized scientific domains are planned. The target domains change for each meeting depending on the needs and interests of the JLESC community. For this meeting, the target domains are computational fluid dynamics, computational biology and climate/weather research.

A key objective of the workshop is to identify new research collaborations and establish a roadmap for their implementation.

Most of the workshop is open to all participants from the JLESC institutions (Illinois, INRIA, ANL, BSC, JSC, RIKEN R-CCS and UTK): faculty, researchers, engineers and students who want to learn more about Post-Petascale / Pre-Exascale Computing. In addition to this schedule with restricted participation, the last day of the 12th JLESC meeting, February 26th, 2021, will be an Open Day, where attendance is open to anybody interested in any of the workshop-related topics.

The Open Day features three invited talks from leaders in the field, presented along with three success stories from JLESC teams. The Open Day invited speakers are Prof. Satoshi Matsuoka (RIKEN), Dr. Lois Curfman McInnes (ANL) and Prof. Torsten Hoefler (ETH).

Location/venue

For the workshop, we will use Zoom, Slack and, in particular, the Gather platform. For the whole event, we will make use of a venue/map created by Virtual Chair specifically for this workshop. Using the Gather platform, you will be able to navigate your avatar through the event, meet people and join the talks. The map will be open to JLESC participants from 7am ET to 1pm ET, Feb 24-26, 2021. You can access the venue through the Virtual Chair landing page:

https://www.virtualchair.net/events/jlesc


For the Open Day (8:00-11:30am ET), the link to the Zoom webinar can be found in the plenary room of the Gather map or here:

https://zoom.virtualchair.net/jlesc/Audience/379aMA


Some notes concerning the event:
  • All sessions will be held using Zoom rooms, embedded into the Gather map.
  • All round-table discussions will be held "in person" on the map at dedicated meeting points and cannot be accessed from outside. Use the "locate" feature to find a speaker or colleague on the map and to meet at the meeting points.
  • For the Open Day (Feb 26), all JLESC participants are free to use the Gather map, while all non-JLESC participants will have to use the direct link to the Zoom webinar.
  • Make sure you have access to the JLESC Slack workspace (access to Slack is restricted to participants from JLESC member institutions).
  • We will provide the most up-to-date information via Slack, and there you can contact us directly in case you have any questions or problems.
  • There is at least one staff member in each session room who can help you navigate or enter/exit the sessions. In addition, there is a help desk in the Lobby. You can also contact us at the #help channel on Slack or directly on the map. As a last resort, you can of course write us an email at any time.
  • Please monitor the Slack #announcements channel and your emails closely for updated information.
Some details concerning the platform:
  • Use the email address you provided during registration for logging in to the venue
  • Zoom links to the sessions are inside the different sessions’ rooms in the venue (press "x" to join Zoom when inside the session's room). If you really cannot use the map, contact us to get direct access.
  • When entering Zoom through the map, you will be automatically muted in the map. Make sure there is no feedback loop when you are online in both systems: mute yourself whenever possible, but don’t forget to unmute when you want to speak.
  • Use the minimap in the lower part of the screen to navigate. On the left, check out the calendar, the chat and the list of participants.
  • Please use a recent version of the Firefox or Chrome desktop browsers for accessing the map. Other browsers may work, but either unreliably or with limited capabilities.
  • In case things do not work as expected (e.g. video or audio fails), please simply reload the browser tab. You’ll be respawned in a few seconds directly where you left off.

Agenda 


Day 1 (February 24)

08:00 ET  Opening Remarks (Location: Plenary)
08:15 ET  Track 1 (Room 1): ST M1.1 Advanced Computing (session chair: Philippe Swartvagher)
          BOS (Plenary): Python in Parallel Computing
10:00 ET  Break (activities on Zoom and gather.town)
10:15 ET  Panel: Open challenges in scheduling for parallel computing (Location: Plenary)
          Moderator: Yves Robert, Inria
          Panelists: Rosa M. Badia (BSC), George Bosilca (UTK), Arnaud Legrand (Inria), Swann Perarnau (ANL), Marc Snir (UIUC), Miwako Tsuji (Riken)
11:15 ET  Meeting venues (Zoom sessions and gather.town) remain open until 1 PM ET

Day 2 (February 25)

08:00 ET  Track 1 (Room 1): ST M2.1 (6) AI and Applications (session chair: Daniel Barry)
          Track 2 (Room 2): ST M2.2 (6) I/O (session chair: Daichi Mukunoki)
          BOS (Plenary): ARM
09:30 ET  Break (activities on Zoom and gather.town)
09:45 ET  Track 1 (Room 1): ST M2.3 (6) Performance tools and numerical methods (session chair: Kevin Sala)
          Track 2 (Room 2): ST M2.4 (6) Programming languages and runtimes (session chair: Ruth Schöbel)
          BOS (Plenary): Heterogeneous and reconfigurable architectures for the future of computing
11:15 ET  Closing Remarks (Location: Plenary)
11:20 ET  Meeting venues (Zoom sessions and gather.town) remain open until 1 PM ET

Open Day (February 26, Location: Plenary)
Zoom link: see the plenary room of the Gather map or the direct webinar link above.
08:00 ET Opening remarks
08:15 ET Prof. Satoshi Matsuoka, Fugaku: the first 'Exascale' supercomputer
08:45 ET Dr. Gabriel Antoniu, A Story About Data: Advancing Storage, I/O and Processing at Challenging Scales
09:15 ET Dr. Lois Curfman McInnes, How a Community Software Ecosystem Perspective Helps to Advance Science Goals in the Exascale Computing Project
09:45 ET Break (BYOC: Bring Your Own Coffee)
10:00 ET Dr. Leo Bautista, Resilience for Extreme Scale Computing
10:30 ET Prof. Torsten Hoefler, High-Performance Deep Learning
11:00 ET Dr. Brian Wylie, Developer tools for porting & tuning parallel applications on extreme-scale systems
11:30 ET Closing remarks

Agenda Items


BOS Sessions

Python in Parallel Computing (organizer: Laxmikant Kale)
  • Zane Fink: Charm4Py: scaling adaptive runtime support in a productive language
  • Ryan Chard: funcX for parallel and distributed computing in Python
  • Robert Speck: Performance Analysis and Benchmarking for a space-time parallel Python code
  • Andreas Kloeckner: High-performance code generation for heterogeneous machines in Python
  • Rosa Badia: dilib: parallel ML with PyCOMPSs
  • Morris Riedel: Python Machine Learning Example Projects using the Modular Supercomputing Architecture (MSA)

ARM (organizer: Mitsuhisa Sato)
  • Yuetsu Kodama: Evaluation of Power Consumption and Parallel Performance in Fugaku
  • Miquel Moreto: Early evaluation experience of the FX1000 installation at BSC
  • Bine Brank: Evaluation of the status of compiler auto-vectorisation for SVE
  • Nam Ho: Exploring memory pre-fetching for Arm-based processors
  • Dong Zhong: Using Arm Scalable Vector Extension for MPI
  • Open discussion

Heterogeneous and reconfigurable architectures for the future of computing (organizers: Kentaro Sano, Kazutomo Yoshii, Xavier Martorell)
  • Opening
  • Jens Huthmann: FPGA-based application on RIKEN’s FPGA Cluster: ESSPER
  • Mohamed E. Aly: MICRO-GAGE: A Low-power Compact GAGE Hash Function Processor for IoT Applications
  • Sitao Huang: PyLog: An Algorithm-Centric Python-Based FPGA Programming and Synthesis Flow
  • Carlos Alvarez: “New OmpSs@FPGA developments for HPC”
  • Open Discussion

Plenary Talks

Fugaku: the first 'Exascale' supercomputer
Presenter: Prof. Satoshi Matsuoka, RIKEN R-CCS
Abstract:

Fugaku is the first ‘exascale’ supercomputer in the world, not due to its peak double-precision flops but rather its demonstrated performance in real applications that were expected of exascale machines at their conception 10 years ago, as well as its reaching actual exaflops in a new breed of benchmarks such as HPL-AI. But the importance of Fugaku lies in the “applications first” philosophy under which it was developed, and its resulting mission to be the centerpiece for the rapid realization of the so-called Japanese ‘Society 5.0’ as defined by the Japanese S&T national policy. As such, Fugaku’s immense power is directly applicable not only to traditional scientific simulation applications, but also to Society 5.0 applications that encompass the convergence of HPC, AI and Big Data, as well as of Cyber (IDC & Network) and Physical (IoT) space, with immediate societal impact as its technologies are made available as Cloud resources. In fact, Fugaku is already in partial operation a year ahead of schedule, primarily to obtain early Society 5.0 results, including combating COVID-19 and resolving other important societal issues, and it will go into full production shortly.

How a Community Software Ecosystem Perspective Helps to Advance Science Goals in the Exascale Computing Project
Presenter: Dr. Lois Curfman McInnes, Argonne National Laboratory
Abstract:

Teams in the U.S. Exascale Computing Project (ECP) are working toward scientific advances on forthcoming exascale platforms, across a diverse suite of applications in chemistry, materials, energy, Earth and space science, data analytics, optimization, artificial intelligence, and national security. In turn, these applications build on software components, including programming models and runtimes, mathematical libraries, data and visualization packages, and development tools that comprise the Extreme-scale Scientific Software Stack (E4S). E4S represents a portfolio-driven effort to collect, test, and deliver the latest in reusable open-source HPC software products, as driven by the common needs of applications. E4S establishes product quality expectations and provides a portal as a starting point for access to product documentation. This presentation will discuss early experiences with how this software ecosystem approach delivers the latest advances from ECP software technology projects to applications, thereby helping to overcome software collaboration challenges across distributed aggregate teams. A key lesson learned is the need for close collaboration between teams developing applications and reusable software technologies, as well as the need for crosscutting strategies to increase developer productivity and software sustainability, thereby mitigating technical risks by building a firmer foundation for reproducible, sustainable science.

High-Performance Deep Learning
Presenter: Prof. Torsten Hoefler, ETH Zurich
Abstract:

Deep Learning is as computationally expensive as the most challenging scientific computing applications. In this talk, we outline the biggest challenges in training deep learning workloads and show how HPC techniques can be used to improve the performance of training workloads. We focus on model sparsity in the training process. This will be even more important once the scientific computing community uses deep learning in their workflows.

A Story About Data: Advancing Storage, I/O and Processing at Challenging Scales
Presenter: Dr. Gabriel Antoniu, Inria
Abstract:

Looking back over more than 10 years of collaboration within JLESC involving Inria, the University of Illinois at Urbana-Champaign and Argonne National Laboratory, this talk will highlight a few achievements on hot topics related to data storage, I/O management, and in situ visualisation and processing. Beyond the initial challenges in these areas posed by the expected arrival of Exascale systems, new ones emerged as the frontiers between High-Performance Computing and Big Data analytics started to blur. We will also discuss upcoming open problems triggered by the increasingly complex workflows that mix simulations, analytics and AI, which bring new requirements and opportunities created by their potential execution on the HPC/Cloud/Edge computing continuum.

Resilience for Extreme Scale Computing
Presenter: Dr. Leo Bautista, BSC
Abstract:

Resilience has been one of the main research topics of JLESC since its inception over a decade ago. We have covered multiple types of failures and errors, which has led to very different fault tolerance techniques, some of them at the intersection of HPC and ML. The research work, carried out by JLESC researchers from five different institutions, shows a strong interaction between theoretical analysis and practical implementation. The results of this endeavor have led to multiple collaboration visits, dozens of publications and hundreds of citations; more interestingly, it has opened new questions and revealed connections between HPC fields that we did not know were connected. In this talk we will go over this trajectory and take a quick glance at what might come in the future for HPC resilience.

Developer tools for porting & tuning parallel applications on extreme-scale systems
Presenter: Dr. Brian Wylie, JSC
Abstract:

Application developers targeting extreme-scale HPC systems such as Fugaku, and modular supercomputing architectures such as JUWELS, need effective tools to assist with porting and tuning for these unusual systems. This collaborative project brings together developers of such tools from JLESC partners to investigate their integration and support joint training activities as the tools are deployed and applied to a variety of application codes.

Short Talks

ST M1.1
  • High-Performance SZ lossy compression Implemented in Vivado HLS for Xilinx FPGAs. Chengming Zhang, ANL. Topics: Advanced architectures; I/O, storage and in-situ processing
  • DataStates: Scalable Data Management for HPC and AI. Bogdan Nicolae, ANL. Topics: I/O, storage and in-situ processing; Resilience
  • ANACIN-X: A workflow for nondeterminism quantification. Kae Suarez and Nick Bell, UTK. Topics: Applications and mini-apps; Resilience; Performance tools; Application correctness and nondeterminism
  • Custom Hardware: Exploring Stream Compressor Designs for X-ray detector ASICs using Chisel. Kazutomo Yoshii, ANL. Topics: Advanced architectures
  • Chameleon Innovation Platform for Computer Science Research: Phase 4 Update. Kate Keahey, ANL. Topics: Advanced architectures; Scientific instruments, testbeds
  • From task graph to asynchronous distributed checkpointing with local restart. Romain Lion, INRIA. Topics: Resilience; Programming languages and runtimes
  • Spray - Sparse Reductions for Arrays. Jan Hueckelheim, ANL. Topics: Advanced architectures; Programming languages and runtimes

ST M2.1
  • PEng4NN: A Performance Estimation Engine for Efficient Neural Network Architecture Search. Ariel Rorabaugh, UTK. Topics: Applications and mini-apps; Machine learning
  • XPSI: X-ray Free Electron Laser-based Protein Structure Identifier. Paula Olaya, UTK. Topics: Applications and mini-apps; Machine learning
  • MocCUDA: Running CUDA codes on Fugaku. Jens Domke, R-CCS. Topics: Applications and mini-apps; Programming languages and runtimes
  • Braid DB: A from-scratch provenance system for AI-driven science. Justin M. Wozniak, ANL. Topics: I/O, storage and in-situ processing
  • E2Clab: Optimizing Complex Workflow Deployments on the Edge-to-Cloud Continuum - A case study with the Pl@ntNet botanical system. Daniel Rosendo, INRIA. Topics: Performance tools; Artificial intelligence and automatic tuning
  • Analysis of medical and simulation data for an improved patient treatment in rhinology. Mario Rüttgers, JSC. Topics: Numerical methods; Artificial intelligence

ST M2.2
  • A Novel Memory-Efficient Deep Learning Training Framework via Error-Bounded Lossy Compression. Sian Jin, ANL. Topics: I/O, storage and in-situ processing; Deep learning
  • End-to-End Performance Optimization for Error-Bounded Lossy Compressor of Scientific Data on GPU-Based HPC Systems. Jiannan Tian, ANL. Topics: I/O, storage and in-situ processing
  • Dhmem: Shared-Memory Communication for Containerized Workflows. Tanner Hobson, UTK. Topics: I/O, storage and in-situ processing
  • Exploring the SZ lossy compressor use for the XIOS I/O server. Xavier Yepes Arbós, BSC. Topics: I/O, storage and in-situ processing
  • New story about SZ Lossy Compression for Scientific Datasets. Sheng Di, ANL. Topics: Applications and mini-apps; I/O, storage and in-situ processing
  • Storage allocation over hybrid HPC/Cloud infrastructures. François Tessier, INRIA. Topics: I/O, storage and in-situ processing

ST M2.3
  • An Application of Least Squares Regression in Native Hardware Event Recognition. Daniel Barry, UTK. Topics: Performance tools
  • EuroCC and the Industry Relations Team at JSC. Konrad Pausch, JSC. Topics: Transfer of technology
  • Measuring hot memory areas. Andreas Beckmann, JSC. Topics: Performance tools
  • Accelerating GMRES via Mixed Precision. Neil Lindquist, UTK. Topics: Numerical methods
  • Randomized Algorithms for the Low Rank Matrix Approximation. Max Melnichenko, UTK. Topics: Numerical methods
  • Verification Method for Eigenvalue Problems without Directed Rounding. Takeshi Terao, Riken. Topics: Numerical methods

ST M2.4
  • Completion Notification using MPI Continuations. Joseph Schuchart, UTK. Topics: Programming languages and runtimes
  • Heterogeneity considered helpful to improve the performance of a GeoStatistics task-based application. Lucas Nesi, UFRGS. Topics: Applications and mini-apps; Programming languages and runtimes
  • A Tale of Two Programming-Models: Enhancing Heterogeneity, Productivity and Performance through OmpSs-2 + OpenACC Inter-operation. Simon Garcia de Gonzalo, BSC. Topics: Programming languages and runtimes
  • Task Queues to the Rescue! How Queue Design Affects the Performance of Eventify on GPUs. Laura Morgenstern, JSC. Topics: Programming languages and runtimes
  • Effective and Efficient Parallelization: Are we there yet? Ivo Kabadshow, JSC. Topics: Programming languages and runtimes
  • Using Performance Attributes for Managing Heterogeneous Memory in HPC Applications. Andrès Rubio Proaño, INRIA. Topics: Advanced architectures; Programming languages and runtimes

Code of conduct

The organizers of the 12th JLESC Workshop are dedicated to providing a harassment-free experience for everyone, regardless of gender, gender identity and expression, age, sexual orientation, disability, physical appearance, body size, race, ethnicity, religion (or lack thereof), technology choices, or other group status.

To make clear what is expected, everyone taking part in the event - speakers, helpers, organizers, and participants - is required to conform to the Berlin Code of Conduct. The full text of the Code of Conduct can be found at http://berlincodeofconduct.org/.

To give a brief overview here, you are expected to:

  • participate in an authentic and active way. In doing so, you contribute to the health and longevity of this community
  • exercise consideration and respect in your speech and actions
  • attempt collaboration before conflict
  • refrain from demeaning, discriminatory, or harassing behavior and speech
  • be mindful of your surroundings and of your fellow participants

The following behavior is unacceptable: intimidating, harassing, abusive, discriminatory, derogatory or demeaning speech or actions by any participant in our community online, at all related events and in one-on-one communications carried out in the context of community business.

Harassment includes harmful or prejudicial verbal or written comments related to gender, sexual orientation, race, religion, disability; inappropriate use of nudity and/or sexual images (including presentation slides); inappropriate depictions of violence (including presentation slides); deliberate intimidation, stalking or following; harassing photography or recording; sustained disruption of talks or other events.

If you witness or are subject to unacceptable behavior, please contact one of the workshop organizers via email or Slack. You can do so anonymously.

Organizers 

  • Inria
  • NCSA, University of Illinois
  • Argonne National Laboratory
  • Barcelona Supercomputing Center
  • Forschungszentrum Jülich
  • RIKEN Center for Computational Science
  • Innovative Computing Laboratory, University of Tennessee