The 12th Scheduling for Large Scale Systems Workshop will be held at The University of Tennessee in Knoxville, Tennessee, May 24-26, 2017. This will be the 12th edition of the workshop series, after Aussois (2004), San Diego (2005), Aussois (2008), Knoxville (2009), Aussois (2010 and 2011), Pittsburgh (2012), Dagstuhl (2013), Lyon (2014), Dagstuhl (2015) and Nashville (2016). As in previous editions, the workshop will be structured as a set of thematic half-day sessions focused on topics related to scheduling and algorithms for large-scale systems. Presentations (about 25 minutes each) are only one part of the program; dedicated sessions for informal discussions and exchanges will complement them. We strongly encourage participants to break up into smaller groups based on shared interests and tackle challenging problems.

The workshop is by invitation only and there is no registration fee. A block of rooms has been reserved for this event; booking is on a first-come, first-served basis, as indicated in the Hotels section.

Last but not least, a special thanks to Leighanne Sisk, Tracy Rafferty and Teresa Finchum for their help with some of the organizational aspects of this workshop.


Program

The program will be updated as more information becomes available. Click a date for more details. Click a line for even more details.

Wednesday, May 24 (9:00AM - 5:30PM)
8:15AM Breakfast Room #205
9:00AM Description of the event Bosilca, George
Miscellaneous information
9:30AM A Catalog of Faults, Errors, and Failures in Extreme-Scale Systems Engelmann, Christian
Building a reliable supercomputer that achieves the expected performance within a given cost budget and providing efficiency and correctness during operation in the presence of faults, errors, and failures requires a full understanding of the resilience problem. The Catalog project develops a fault taxonomy, catalog and models that capture the observed and inferred conditions in current supercomputers and extrapolates this knowledge to future-generation systems. To date, the Catalog project has analyzed billions of node hours of system logs from supercomputers at Oak Ridge National Laboratory and Argonne National Laboratory. This talk provides an overview of our findings and lessons learned.
10:00AM Coffee Break
10:30AM Dataflow Programming Paradigms for Computational Chemistry Methods Jagode, Heike
The transition to multicore and heterogeneous architectures has shaped the High Performance Computing (HPC) landscape over the past decades. With the increase in scale, complexity, and heterogeneity of modern HPC platforms, one of the grim challenges for traditional programming models is to sustain the expected performance at scale. By contrast, dataflow programming models have been growing in popularity as a means to deliver a good balance between performance and portability in the post-petascale era. This work introduces dataflow programming models for computational chemistry methods, and compares different dataflow executions in terms of programmability, resource utilization, and scalability.
This effort is driven by computational chemistry applications, considering that they comprise one of the driving forces of HPC. In particular, many-body methods, such as Coupled Cluster methods (CC), which are the "gold standard" to compute energies in quantum chemistry, are of particular interest for the applied chemistry community. On that account, the latest development for CC methods is used as the primary vehicle for this research, but our effort is not limited to CC and can be applied across other application domains.
We present two programming paradigms for expressing CC methods in a dataflow form so that they can make use of task scheduling systems. Explicit dataflow, the programming model in which the dataflow is explicitly specified by the developer, is contrasted with implicit dataflow, where a task scheduling runtime derives the dataflow. We present a thorough performance evaluation and demonstrate that dataflow-based execution of CC methods enables more efficient and scalable computation.
11:00AM List-Scheduling vs. Cluster-Scheduling -- Fight! Oliver Sinnen
In scheduling theory and parallel computing practice, programs are often represented as directed acyclic graphs. Finding a makespan-minimising schedule for such a graph on a given number of homogeneous processors (P|prec,c_{ij}|C_{max}) is an NP-hard optimisation problem for which many heuristics have been proposed. The two dominant algorithmic approaches are list-scheduling and cluster-scheduling (based on clustering), where clustering at its core targets an unlimited number of processors. Given their heuristic nature, their relative performance has been evaluated in many comparisons based on simulations and experiments. However, the overwhelming majority of these evaluations compare algorithms within, but not across, the two categories. As a consequence, it is not clear how cluster-scheduling performs relative to list scheduling for a limited number of processors, or how list scheduling performs against clustering for an unlimited number of processors. This study addresses these open questions by comparing a large set of representative algorithms from the two approaches in an extensive experimental evaluation on a large set of graphs with many different structures and parameters. The algorithms are discussed and studied in a modular fashion, categorised into their components; some of the included algorithms are previously unpublished combinations of these modular techniques. This approach also permits studying the merit of techniques such as task insertion or lookahead by comparing algorithms that differ only in that aspect. The evaluation results show that simple low-complexity algorithms are surprisingly competitive and that more sophisticated algorithms only exhibit their strengths under certain conditions.
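For readers less familiar with one side of this comparison, here is a minimal, purely illustrative sketch of a generic list-scheduling heuristic in Python (it is not one of the algorithms evaluated in the talk, and the graph representation is an assumption): tasks are ordered by non-increasing bottom level, and each task is greedily placed on the processor that minimizes its finish time, with the edge cost c charged only when producer and consumer sit on different processors.

    # Illustrative list-scheduling sketch: rank tasks by bottom level, then
    # greedily place each task on the processor giving the earliest finish time.
    def bottom_level(task, succ, w, c, memo):
        # longest path (computation + communication) from 'task' to an exit task
        if task not in memo:
            memo[task] = w[task] + max(
                (c[(task, s)] + bottom_level(s, succ, w, c, memo) for s in succ[task]),
                default=0)
        return memo[task]

    def list_schedule(tasks, succ, pred, w, c, num_procs):
        memo = {}
        order = sorted(tasks, key=lambda t: -bottom_level(t, succ, w, c, memo))
        proc_ready = [0.0] * num_procs      # time at which each processor becomes free
        finish, place = {}, {}
        for t in order:                     # decreasing bottom level is a topological order
            best = None
            for p in range(num_procs):
                # data from a predecessor on another processor pays the edge cost
                data_ready = max(
                    (finish[u] + (0 if place[u] == p else c[(u, t)]) for u in pred[t]),
                    default=0.0)
                start = max(proc_ready[p], data_ready)
                if best is None or start + w[t] < best[0]:
                    best = (start + w[t], p)
            finish[t], place[t] = best
            proc_ready[place[t]] = finish[t]
        return finish, place

A clustering-based algorithm, by contrast, would first group heavily communicating tasks into clusters assuming unlimited processors, and only then map the clusters onto the available machines.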
11:30AM Reducing communication and pipelining global collectives in GMRES Ichitaro Yamazaki
We compare the performance of s-step and pipelined GMRES on distributed multicore CPUs. For this study, we implemented thread parallelism and communication overlap in two different ways. The first uses nonblocking MPI collectives and thread-parallel computational kernels; the second relies on the shared-memory task scheduler QUARK. In our experiments, the first implementation pipelines the global collectives better, while the second obtains higher thread-parallel efficiency. In addition, when we combined the two techniques, we improved the performance of pipelined GMRES by factors of up to 1.67x, and it performed up to 1.22x better than s-step GMRES, even though we only used 50 nodes for our study.
12:00PM Lunch Break
Food
1:00PM Discussions
To be announced
3:00PM Checkpointing workflows for fail-stop errors Yves Robert
At last an algorithm that goes beyond linear chains and application-specific problems!
Joint work with Louis-Claude Canon, Henri Casanova, Li Han and Frédéric Vivien
3:30PM Identifying the right replication level to detect and correct silent errors at scale Hongyang Sun
This work provides a model and an analytical study of replication as a technique to detect and correct silent errors. Although other detection techniques exist for HPC applications, based on algorithms (ABFT), invariant preservation or data analytics, replication remains the most transparent and least intrusive technique. I will discuss the right level (duplication, triplication or more) of replication needed to efficiently detect and correct silent errors. Replication is combined with checkpointing and comes in two flavors: process replication and group replication. In both scenarios, results are compared before each checkpoint, which is taken only when both results (duplication) or two out of three results (triplication) coincide. If not, one or more silent errors have been detected, and the application rolls back to the last checkpoint. We provide a detailed analytical study of both scenarios, with formulas to decide, for each scenario, the optimal parameters as a function of the error rate, checkpoint cost, and platform size. We also report extensive simulation results that corroborate the analytical model.
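As a purely illustrative companion to this abstract, the sketch below (Python, with hypothetical parameters; it is not the talk's analytical model) shows the duplication flavor of the scheme: each segment is executed twice, the two results are compared before the checkpoint is committed, and a mismatch triggers re-execution from the last committed state.

    import random

    # Illustrative sketch of duplication combined with checkpointing: two replicas
    # execute each segment, results are compared at the checkpoint, and a mismatch
    # means a silent error was detected, so the segment is re-executed.
    def run_replica(state, segment, error_rate):
        result = state + segment            # stand-in for the real computation
        if random.random() < error_rate:    # a silent error corrupts the result
            result += random.choice([-1, 1])
        return result

    def duplicated_execution(num_segments, error_rate=0.05, checkpoint_cost=1.0):
        state, time = 0, 0.0
        segment = 0
        while segment < num_segments:
            r1 = run_replica(state, segment, error_rate)
            r2 = run_replica(state, segment, error_rate)
            time += 1.0 + checkpoint_cost   # segment work plus verification/checkpoint
            if r1 == r2:                    # results agree: commit the checkpoint
                state = r1
                segment += 1
            # otherwise: silent error detected, retry the segment from 'state'
        return state, time

Triplication would compare three results and commit as soon as two of them coincide, trading extra resources for fewer rollbacks; the formulas in this work quantify when that trade-off pays off.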
4:00PM Enabling Hierarchical Scheduling in Next-Generation HPC Systems with Flux Tapasya Patki
The landscape of supercomputing systems is changing as we move from the petascale era to the exascale era. Scientific simulations are becoming more complex and resources on shared clusters are becoming more diverse than ever. Simultaneously managing massive amounts of data along with multiple resources such as power, network and I/O bandwidth at large scales poses several scheduling optimization challenges. Despite this, the resource management and job scheduling software that is used in production supercomputers remains stuck with the old centralized approaches.
In this talk, I will address the three key challenges in resource management for next-generation supercomputers: scalability, flexibility for flow resources (such as power, network and I/O), and fault tolerance. I will present Flux, a next-generation production resource manager under development at Lawrence Livermore National Laboratory, which is based on a fully hierarchical scheduling model. Flux provides a framework for addressing the aforementioned challenges, and looks beyond traditional approaches that have focused primarily on using centralized models and static resource allocations.
4:30PM Performance considered Helpful Amina Guermouche
Understanding the performance of a multi-threaded application is difficult. Threads interfere when they access the same hardware resource, which slows down their execution. Unfortunately, current profiling tools are unable to identify the most problematic interference, because they cannot classify interference on different hardware resources. In this work, we propose a holistic metric able to simultaneously classify interference on different hardware resources. The metric considers performance variation as a universal indicator. We propose a profiling toolchain that automatically computes this metric. In an evaluation of 27 applications, we show that our profiling toolchain can identify interference caused by 6 different kinds of thread interaction in 9 applications. We are able to easily remove 7 performance bottlenecks in 6 of them, which leads to performance improvements of up to 8.87x.
5:00PM Parallel Online Scheduling with Deadlines and Slack Uwe Schwiegelshohn
We address a basic online scheduling problem on parallel identical machines: jobs arrive over time, and we must decide immediately and irrevocably whether to accept or to reject the arriving job. Our goal is to maximize machine usage while completing every accepted job by its deadline. The deadline of a job is no earlier than (1+ε)·pj time units after its submission time, where pj is the processing time of the job and ε>0 is a fixed and typically small slack parameter. The use of a slack parameter is a common approach to divide the problem space for scheduling problems with deadlines.
The problem corresponds to the business model of an IaaS cloud provider who wants to maximize profit while honoring the service contracts with the users. In our basic problem, all machines are equal, there is neither a discount nor a surcharge for any specific use of a machine, and there are no preferred users. Therefore, the profit is equivalent to the total machine usage. Since the cloud provider maintains control of the machines, he or she can select the value of the slack parameter to increase scheduling flexibility while keeping the impact on the customers as small as possible.
We consider problems with and without migration. In cloud computing, migration enables the provider to handle unexpected resource failures without significant impact on the customer. We discuss the impact of migration on the performance of the system.
We apply competitive analysis to these problems and give new lower bounds. Further, we present an algorithm with migration that has an almost tight competitive factor. For the non-preemptive case, we show that the previously published algorithm by Lee does not guarantee the claimed competitive factor, and we use our preemptive results to derive a new algorithm that improves the competitive factor for the single-machine case.
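To make the admission decision concrete, here is a minimal single-machine sketch in Python (illustrative only; it is not one of the algorithms analysed in the talk, which address parallel machines and migration): an arriving job with processing time p receives the deadline now + (1+ε)·p and is accepted only if an EDF simulation can still complete every accepted job on time.

    # Illustrative single-machine admission sketch with slack eps.
    def edf_feasible(now, jobs):
        # jobs: list of (remaining_time, deadline) of already released jobs;
        # simulate earliest-deadline-first execution starting at 'now'
        t = now
        for remaining, deadline in sorted(jobs, key=lambda j: j[1]):
            t += remaining
            if t > deadline:
                return False
        return True

    class OnlineAdmission:
        def __init__(self, eps):
            self.eps = eps
            self.accepted = []   # (remaining_time, deadline) of unfinished accepted jobs
                                 # (decreasing remaining_time as the machine executes
                                 # is left to the caller in this sketch)

        def offer(self, now, p):
            deadline = now + (1 + self.eps) * p
            if edf_feasible(now, self.accepted + [(p, deadline)]):
                self.accepted.append((p, deadline))
                return True      # accept: the job adds p to the machine usage
            return False         # reject immediately and irrevocably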
6:30PM Social Event (BBM and parking)
Thursday, May 25 (9:00AM - 5:30PM)
8:15AM Breakfast Room #205
9:00AM Cheap and Guaranteed Scheduling Algorithms for Independent Tasks on Heterogeneous Resources Olivier Beaumont
9:30AM Network scheduling techniques that guarantee isolation in shared clouds Ana Gainaru
The talk introduces a new abstraction for cloud services that provides isolation of network links. Interference from other tenants can cause performance degradation in cloud applications that may exceed 65%. The talk presents a cloud allocation and routing technology, created by the architecture team at Mellanox, that provides each tenant with the same bandwidth as in its own private data center. The results show that our method completely eliminates this application performance degradation and can be easily used in clouds today without any hardware changes.
10:00AM Coffee Break
10:30AM Harvesting Underutilized Resources to Improve Responsiveness and Tolerance to Crash and Silent Faults for Data-intensive Applications Taieb Znati
The talk will focus on a new data-centric computational model to improve the responsiveness of data-intensive applications to crash faults and to augment their ability to deal with silent errors, thereby ensuring computational accuracy. The basic tenet of the model is a task replication scheme that interweaves the processing of a replicated data split among multiple distributed tasks, with each task consuming data at a different offset. In the absence of a failure, the concurrent execution of the tasks ensures complete processing of the data split, with a significant reduction in total execution time. In case of an error, the remaining tasks take over the execution of the unfinished work and finish on time. The proposed scheme also guarantees timely detection and correction of silent data corruptions along with crash faults. To demonstrate the effectiveness of the scheme, we extend Hadoop's MapReduce code base to deal with silent and crash failures. Results show a performance improvement of 50% and 33% over Hadoop's Speculative Execution when dealing with crash faults and silent errors, respectively.
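The offset-based interweaving can be illustrated with a small sketch (Python; hypothetical, not the actual Hadoop extension described in the talk): each of k replica tasks walks over the same data split starting at a different offset, so the split is covered after roughly 1/k of the time when all replicas are alive, and it is still fully covered, just later, when some replicas crash.

    # Illustrative sketch of offset-interleaved replica tasks over one data split.
    def replica_orders(num_records, num_replicas):
        # replica i starts at offset i*num_records/num_replicas and wraps around
        step = num_records // num_replicas
        return [[(i * step + j) % num_records for j in range(num_records)]
                for i in range(num_replicas)]

    def steps_to_cover(num_records, num_replicas, crashed=frozenset()):
        orders = replica_orders(num_records, num_replicas)
        processed, steps = set(), 0
        for position in range(num_records):
            for replica, order in enumerate(orders):
                if replica not in crashed:
                    processed.add(order[position])
            steps += 1
            if len(processed) == num_records:   # the whole split has been processed
                break
        return steps

For instance, steps_to_cover(12, 3) returns 4, while steps_to_cover(12, 3, crashed={2}) returns 8: the surviving replicas absorb the crashed task's share of the split without restarting it.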
11:00AM Bringing Dynamicity to Resource Management Pierre Lemarinier
Future supercomputers will need to support both traditional HPC applications and Big Data/High Performance Analysis applications seamlessly in a common environment. This motivates traditional job scheduling systems to support malleable jobs and allocations that can dynamically change in size, in order to adapt the amount of resources to the actual current needs of the different applications. It also calls for future innovative HPC applications to adapt to this environment and provide some level of malleability, releasing under-utilised resources to other tasks. In this presentation, I will present the results of the research effort we undertook for the Human Brain PCP project, and the new features integrated in IBM Spectrum LSF that resulted from this effort.
11:30AM Is Acyclic Directed Graph Partitioning Effective for Locality-Aware Scheduling? Julien Herrmann
We investigate how finding a good partition of the computational directed acyclic graph associated with an algorithm can help find an execution pattern that improves data locality. The partition is required to be acyclic, i.e., the inter-part edges between vertices from different parts should preserve an acyclic dependency structure among the parts. Furthermore, if we can partition the graph, by limiting the part sizes, such that every part can be executed within the cache and the total volume of communication between the parts is small, we can derive a global execution schedule with a minimal number of cache misses. In this work, we adopt a multilevel approach with coarsening, initial partitioning, and refinement phases for the acyclic partitioning of directed acyclic graphs, and we develop a recursive bisection scheme. To ensure the acyclicity of the partition at all times, we propose novel and efficient agglomerative coarsening and refinement heuristics. We then use our acyclic partitioner to find meaningful partitions for locality-aware scheduling. We investigate different strategies for limiting part size and different scheduling techniques on large graphs arising from linear algebra applications.
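The acyclicity requirement on the partition is easy to state programmatically; the following Python sketch (illustrative, not the partitioner developed in this work) checks whether a given assignment of DAG vertices to parts yields an acyclic quotient graph, i.e., whether the parts themselves admit a topological order and can therefore be executed one after another.

    # Check that a partition of a DAG is acyclic: build the quotient graph over the
    # parts and run Kahn's algorithm on it.
    def quotient_is_acyclic(edges, part):
        # edges: iterable of (u, v) DAG edges; part: dict mapping vertex -> part id
        quotient = {}
        for u, v in edges:
            if part[u] != part[v]:
                quotient.setdefault(part[u], set()).add(part[v])
        nodes = set(part.values())
        indeg = {p: 0 for p in nodes}
        for src, dsts in quotient.items():
            for d in dsts:
                indeg[d] += 1
        ready = [p for p in nodes if indeg[p] == 0]
        seen = 0
        while ready:
            p = ready.pop()
            seen += 1
            for d in quotient.get(p, ()):
                indeg[d] -= 1
                if indeg[d] == 0:
                    ready.append(d)
        return seen == len(nodes)   # acyclic iff every part could be topologically ordered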
12:00PM Lunch Break
Food
1:00PM Discussions
3:00PM Resource and Performance Optimization of Big Data Scientific Workflows in Distributed Heterogeneous Network Environments Yi Gu
Increasingly, scientific knowledge is discovered computationally. In many disciplines, complex computational workflows are needed to convert large volumes of observational, experimental and simulation data into useful information. These workflows can now be executed in a distributed heterogeneous network environment, such as a cloud-based platform, where dynamically allocated virtual machine resources are utilized. Because cloud providers can quickly provision virtually unlimited resources, the traditional workflow mapping problem (which assigns tasks to a fixed set of hardware resources) has become a resource optimization problem. The new assignment problem starts with a catalog of potential instance types and selects the appropriate cloud virtual machine instance for each task in the workflow, with the objective of minimizing the associated total cost. The effectiveness of the proposed solution is evaluated by an extensive set of simulation results as well as a real execution using Breeze, a workflow execution and visualization tool that we developed.
3:30PM Towards highly scalable Ab Initio Molecular Dynamics (AIMD) simulations on the Intel Knights Landing manycore processor Mathias Jacquelin
The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrodinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step.
With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute intensive application such as AIMD forms a good candidate to leverage this processing power.
In this work, we focus on adding thread-level parallelism to the plane-wave DFT methodology implemented in NWChem. Through careful optimization of the tall-skinny matrix products at the heart of the Lagrange multiplier and non-local pseudopotential kernels, as well as the 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results for the complete AIMD simulation on a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.
4:00PM Parallel Evolutionary Optimizations for Neuromorphic Computing Systems Jim Plank
Our research group at Tennessee and ORNL pursues a holistic research program in the area of Neuromorphic Computing. Specifically, we focus on applications that can leverage brain-inspired computing devices, programming methodologies for these devices, and of course the architectures and devices themselves. In this talk, I will outline our research agenda and recent results. Then, I will focus on parallel evolutionary optimization for programming these devices, and on some performance results from ORNL's Titan supercomputer.
4:30PM Autonomic Data Management for In-Situ Scientific Workflows Manish Parashar
Data staging and in-situ/in-transit data processing are emerging as attractive approaches for supporting extreme scale scientific workflows. These approaches can improve end-to-end performance by enabling efficient data sharing between coupled simulations and data analytics components of an in-situ workflow. However, complex and dynamic data access/exchange patterns coupled with architectural trends toward smaller memory per core and deeper memory hierarchies threaten to impact the effectiveness of this approach. In this talk, I will explore a policy-based autonomic data management approach that can adaptively respond at runtime to dynamic data management requirements. Specifically, I will formulate the autonomic data management approach and present the design and implementation of autonomic policies as well as cross layer mechanisms, and will experimentally demonstrate how these autonomic adaptations can tune the application behaviors and resource allocations at runtime while meeting the data management requirements and constraints. This research is part of the DataSpaces project at the Rutgers Discovery Informatics Institute.
5:00PM Failure independence: Are we right to be wrong? Frederic Vivien
Most work on resilience assumes that failures are not temporally correlated. This assumption is necessary to enable any theoretical study. However, it may not be valid, as there are strong reasons to believe that machines can be subject to cascade failures. A cascade failure is a sequence of correlated failures happening in a short time span. In practice, are there cascade failures in failure traces? If so, should we adapt resilience techniques to take cascade failures into account, or were we right to be wrong by simply ignoring them?
6:30PM Social Event (Salsarita's)
Friday, May 26 (9:00AM - 5:30PM)
8:15AM Breakfast Room #205
9:00AM A Resource Management Proposition for Applications, Tools and Services on Extreme Scale Platforms Dorian Arnold
Effective software execution in large, complex HPC environments requires a detailed understanding of a software system's computation and communication and of how to best map these workloads to the underlying physical resources. While these complex environments are already difficult to program statically, the workloads offered by applications, tools and other system services typically vary throughout their makespans. Such dynamism poses new challenges for efficient and effective resource management. This talk presents the ideas behind FIESTA, a framework for introspective extreme-scale tools and applications. With this framework, we explore middleware-level support for autonomous tool and application resource management. The long-term outlook is a comprehensive framework for the design, development and deployment of extreme-scale software systems. The key innovations include: (1) self-monitoring of dynamic environmental characteristics, including resource availability and workload resource demands; (2) self-detection of functional and performance problems; (3) decision processing for evaluating corrective actions; and (4) instantiation of decided actions. For dynamic resource management, FIESTA leverages a new resource management paradigm in which software systems can readily and responsively expand and shrink their resource allocations throughout their lifetime.
9:30AM Scheduling Messages in MPI Implementations: Strong vs. Weak Progress, Overlap, Models, and Jitter Anthony Skjellum
In the 25 years since the first MPI standard, there have been many highly successful implementations of MPI. Based on the standard's "progress rule," some implementations (notably the most popular open, free implementations) have chosen to provide polling progress, while others have provided strong progress, independent of the user threads' subsequent calls to MPI. In certain cases, polling vs. strong progress behavior also depends on the underlying data mover used with a given MPI configuration. Nowadays, most commercial deployments are based on these open implementations (Open MPI and MPICH), and many share the message scheduling properties of this middleware. Message completion notification can also be of a polling or non-polling nature, with implications for the scheduling of the user's processes/threads and MPI, and for the potential overlap of communication, computation, and I/O. In this talk, we cover options for message scheduling within MPI, and experience over the past quarter century with designs that are both polling and strong-progress oriented. We recall a taxonomy for MPI architectures based on progress and notification first offered by Dimitrov in 2001, and we indicate the rationale for progress choices from the literature. Moving forward, with many threads on single cores and many cores sharing single address spaces, plus strong concerns about jitter and about the overlap of communication and computation at pre-exascale, these issues are growing in importance. Getting to scheduling architectures within MPI that support strong progress and allow for differentiated strategies for moving short and long messages is important. The talk also considers how to annotate the APIs to suggest when different behavior is desired (e.g., as hints or similar). Revisiting how previous, successful strongly progressive MPI implementations have worked, and why, is timely and is covered as well.

10:00AM Coffee Break
10:30AM Bidiagonalization and R-Bidiagonalization: Parallel Tiled Algorithms, Critical Paths and Distributed-Memory Implementation. Mathieu Faverge
We study tiled algorithms for going from a “full” matrix to a condensed “band bidiagonal” form using orthogonal transformations: (i) the tiled bidiagonalization algorithm BiDiag, which is a tiled version of the standard scalar bidiagonalization algorithm; and (ii) the R-bidiagonalization algorithm R-BiDiag, which is a tiled version of the algorithm which consists in first performing the QR factorization of the initial matrix, then performing the band-bidiagonalization of the R-factor. For both BiDiag and R-BiDiag, we use four main types of reduction trees, namely FlatTS, FlatTT, Greedy, and a newly introduced auto-adaptive tree, Auto. We provide a study of critical path lengths for these tiled algorithms, which shows that (i) R-BiDiag has a shorter critical path length than BiDiag for tall and skinny matrices, and (ii) Greedy based schemes are much better than earlier proposed algorithms with unbounded resources. We provide experiments on a single multicore node, and on a few multicore nodes of a parallel distributed shared-memory system, to show the superiority of the new algorithms on a variety of matrix sizes, matrix shapes and core counts.
11:00AM Parallel Space-Time Kernel Density Estimation Erik Saule
The exponential growth of available data has increased the need for interactive exploratory analysis. Datasets can no longer be understood through manual crawling and simple statistics. In Geographical Information Systems (GIS), a dataset is often composed of events localized in space and time, and visualizing such a dataset involves building a map of where the events occurred.
In this work, we focus on events that are localized in three dimensions (latitude, longitude, and time), and on computing the first step of the visualization pipeline, space-time kernel density estimation (STKDE), which is the most computationally expensive. Starting from a gold standard implementation, we show how algorithm design and engineering, parallel decomposition, and scheduling can be applied to bring space-time kernel density estimation close to real time. We validate our techniques on real-world datasets extracted from infectious disease, social media, and ornithology applications.
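For reference, the computation performed by such a gold-standard baseline can be sketched as a naive voxel-by-voxel accumulation (Python/NumPy; the Epanechnikov kernel, bandwidths and grid below are assumptions, and this talk is precisely about making this step much faster than the cubic loop shown here).

    import numpy as np

    # Naive space-time kernel density estimation: each (x, y, t) voxel sums a
    # product kernel over all events, with spatial bandwidth hs and temporal ht.
    def epanechnikov(u):
        return np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u * u), 0.0)

    def stkde(events, xs, ys, ts, hs, ht):
        # events: array of shape (n, 3) with columns (longitude, latitude, time)
        density = np.zeros((len(xs), len(ys), len(ts)))
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                for k, t in enumerate(ts):
                    du = (x - events[:, 0]) / hs
                    dv = (y - events[:, 1]) / hs
                    dw = (t - events[:, 2]) / ht
                    density[i, j, k] = np.sum(
                        epanechnikov(du) * epanechnikov(dv) * epanechnikov(dw))
        return density / (len(events) * hs * hs * ht)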
11:30AM Lunch Break
Food
1:00PM Discussions
Saturday, May 27 (8:30AM - 5:00PM)
8:30AM Trip to the mountains around Knoxville (hike)
The planned hike starts at 10AM. It is on the Cumberland Trail above LaFollette, north of Knoxville.

Participants

Organizing Committee

  1. George Bosilca, Innovative Computing Laboratory, University of Tennessee Knoxville, USA
  2. Aurélien Bouteiller, Innovative Computing Laboratory, University of Tennessee Knoxville, USA
  3. Damien Genet, Innovative Computing Laboratory, University of Tennessee Knoxville, USA
  4. Thomas Hérault, Innovative Computing Laboratory, University of Tennessee Knoxville, USA
  5. Yves Robert, ENS Lyon, France
  6. Jack Dongarra, Innovative Computing Laboratory, University of Tennessee Knoxville, USA

Tentative list of attendees

  1. Dorian Arnold, University of New Mexico, USA
  2. Olivier Beaumont, INRIA Bordeaux, France
  3. Christian Engelmann, Oak Ridge National Laboratory, USA
  4. Mathieu Faverge, Bordeaux INP, Talence, France
  5. Kurt Ferreira, Sandia National Laboratory, USA
  6. Ana Gainaru, Mellanox, Knoxville, USA
  7. Yi Gu, Middle Tennessee State University, USA
  8. Amina Guermouche, Télécom SudParis, France
  9. Heike Jagode, ICL, University of Tennessee Knoxville, USA
  10. Julien Herrmann, Georgia Tech, Atlanta, USA
  11. Mathias Jacquelin, Lawrence Berkeley National Lab, USA
  12. Pierre Lemarinier, IBM Dublin, Ireland
  13. Rami Melhem, University of Pittsburgh, USA
  14. Esmond Ng, Lawrence Berkeley National Lab, USA
  15. Manish Parashar, Rutgers University, USA
  16. Tapasya Patki, Lawrence Livermore National Lab, USA
  17. Jim Plank, University of Tennessee, Knoxville, USA
  18. Erik Saule, University of North Carolina Charlotte, USA
  19. Uwe Schwiegelshohn, TU Dortmund University, Germany
  20. Anthony Skjellum, Auburn University, USA
  21. Oliver Sinnen, Auckland University, New Zealand
  22. Hongyang Sun, Vanderbilt University, USA
  23. Bora Ucar, CNRS, ENS Lyon, France
  24. Frédéric Vivien, INRIA, ENS Lyon, France
  25. Ichitaro Yamazaki, ICL, University of Tennessee Knoxville, USA
  26. Taieb Znati, University of Pittsburgh, USA

Venue

The workshop will be held at the Innovative Computing Laboratory, in Amphitheater #205-206 of the Claxton Building, on the main campus of the University of Tennessee, Knoxville. A PDF containing directions to ICL and parking information is available.

It must be noted that the workshop period coincides with Destination Imagination, "The World's Largest Celebration of Creativity", an international meeting held in Knoxville, TN, during which the University grounds (and most of the hotels in the vicinity) are flooded with young and passionate students and their supportive parents.


Hotels

Knoxville, one of the gateways to the Smoky Mountains, is a busy city in the spring. The 12th Scheduling workshop coincides with Destination Imagination, "The World's Largest Celebration of Creativity", an international youth meeting held in Knoxville, TN. The University grounds (and most of the hotels in the vicinity) will be flooded with young and passionate students and their supportive parents.

To accommodate the workshop attendees, we have reserved a block of rooms from May 22 to May 27 at Extended Stay America, 214 Langley Pl, Knoxville, TN 37922, at a nightly rate of $49.49 per room. These rooms are blocked until the end of December 2016 and will go on a first-come, first-served basis (use code SCHED17 when booking). Once this reservation block is depleted, you will have to find a booking on your own.

To make your booking, please contact Autumn Shaw at Extended Stay America by email at ashaw at extendedstay.com or by phone at +1 (517) 881-1207, and mention the workshop reference SCHED17.

While the hotel is a little farther from the meeting location, it was the only one that agreed to make arrangements for such a large number of rooms. Participants are strongly encouraged to rent a car, arrange carpooling with fellow attendees, use Uber, or, as a last resort, get in touch with the organizers for help getting around within the city.


Sponsors