6th POP Webinar - Impact of Sequential Performance on Parallel Codes

Wednesday 28 March 2018 - 14:00 BST | 15:00 CEST

The analysis of parallel programs often focuses on the concurrency achieved, the impact of data transfers, or the dependencies between computations in different processes. In POP Webinar #3 we looked at how to characterize these aspects of application behaviour and gain insight into the fundamental causes of parallel efficiency loss. However, the total performance and scaling behaviour of an application depends not only on its parallel efficiency, but also on how the sequential computations between calls to the runtime change as the core count increases.

This webinar will first present a model characterizing how the performance of the user-level code between calls to the runtime (MPI or OpenMP) evolves. We will show a methodology based on three metrics derived from hardware counter measurements that gives a global view of how the sequential computations scale and how this affects the overall application scaling. The webinar will then show how advanced analytics can be used to identify code regions with different sequential cost and performance, and how they evolve as the core count grows. Finally, we will show example analyses of the insight that can be obtained with these techniques.
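The announcement does not name the three hardware-counter metrics. As a rough illustration only, the Python sketch below shows one plausible way such metrics could be derived, in the spirit of the POP methodology: relating the useful (non-runtime) computation of a scaled run to a reference run through instruction, IPC, and clock-frequency ratios. All names, inputs, and numbers here are hypothetical and are not the presenters' tooling.

    # Minimal sketch, assuming per-run totals of useful instructions, useful
    # cycles, and useful CPU time have already been aggregated from hardware
    # counters (e.g. collected with PAPI or perf); this is an illustration,
    # not the actual POP tool chain.

    from dataclasses import dataclass

    @dataclass
    class CounterTotals:
        cores: int
        instructions: float  # total instructions in useful computation
        cycles: float        # total cycles in useful computation
        time_s: float        # total CPU time in useful computation (seconds)

        @property
        def ipc(self) -> float:
            # Instructions retired per cycle in the useful computation
            return self.instructions / self.cycles

        @property
        def frequency_hz(self) -> float:
            # Effective clock frequency of the useful computation
            return self.cycles / self.time_s

    def computation_scaling_metrics(ref: CounterTotals, run: CounterTotals) -> dict:
        """Relate a scaled run to a reference run via three ratios:
        does the total useful instruction count grow, do those instructions
        execute at a lower IPC (e.g. memory contention), and does the
        effective clock frequency drop (e.g. thermal or power effects)?"""
        return {
            "instruction_scaling": ref.instructions / run.instructions,
            "ipc_scaling": run.ipc / ref.ipc,
            "frequency_scaling": run.frequency_hz / ref.frequency_hz,
        }

    # Made-up example: 4x the cores, slightly more total instructions,
    # lower IPC and a small clock drop in the larger run.
    ref = CounterTotals(cores=48,  instructions=1.0e12, cycles=5.0e11, time_s=200.0)
    run = CounterTotals(cores=192, instructions=1.1e12, cycles=6.1e11, time_s=250.0)
    print(computation_scaling_metrics(ref, run))

Ratios below 1.0 flag the sequential computation itself as a contributor to poor scaling, independently of parallel efficiency; the webinar presents the actual metrics and how they are interpreted.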

In this 30-minute live webinar we will:

  • Discuss the impact of sequential performance in parallel codes
  • Present three metrics derived from hardware counter measurements
  • Introduce some advanced analytics
  • Show examples of the insight this analysis provides

REGISTER HERE

About the Presenter

Jesus Labarta has been a full professor of Computer Architecture at the Technical University of Catalonia (UPC) since 1990.
Since 2005, he has been responsible for the Computer Science Research Department within the Barcelona Supercomputing Center (BSC). His current work focuses on performance analysis tools, programming models, and resource management. His team distributes the open-source BSC tools (Paraver and Dimemas) and carries out research on increasing the intelligence embedded in performance analysis tools. He is involved in the development of the OmpSs programming model and its implementations for SMP, GPU, and cluster platforms.