6th POP Webinar - Impact of Sequential Performance on Parallel Codes

Wednesday, March 28, 2018

The analysis of parallel programs often focuses on the concurrency achieved, the impact of data transfers, or the dependences between computations in different processes. In POP Webinar #3 we looked at how to characterize these aspects of application behaviour and gain insight into the fundamental causes of parallel efficiency loss. However, the total performance and scaling behaviour of an application depends not only on its parallel efficiency, but also on how the sequential computations between calls to the runtime change as the core count increases.

This webinar presented a model characterizing how the performance of the user-level code between calls to the runtime (MPI or OpenMP) evolves. It introduced a methodology based on three metrics derived from hardware counter measurements that gives a global view of the scaling of sequential computations and of how they impact overall application scaling. The webinar also showed how advanced analytics can identify code regions with different sequential cost and performance, and how these evolve as core counts grow. Finally, it presented some example analyses illustrating the insight that can be obtained with these techniques.
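As a rough illustration of such hardware-counter-based scaling metrics, the sketch below compares runs at different core counts against a baseline run using three ratios: total useful instructions, IPC (instructions per cycle), and effective frequency (cycles per second). The function name, the input structure, and the exact ratio definitions are assumptions for illustration, not the webinar's actual formulation.

```python
# Hedged sketch: three scaling metrics computed from per-run hardware counter
# totals for the useful (non-runtime) computation. A value of 1.0 means the
# metric is unchanged relative to the baseline run; values below 1.0 indicate
# a loss. All names and the input layout are illustrative assumptions.

def computation_scaling(runs, base_cores):
    """runs: {cores: {'instr': total instructions,
                      'cycles': total cycles,
                      'time_s': elapsed useful time in seconds}}
    Returns, per core count, three ratios relative to the run at base_cores."""
    base = runs[base_cores]
    base_ipc = base['instr'] / base['cycles']
    base_freq = base['cycles'] / base['time_s']
    result = {}
    for cores, r in runs.items():
        ipc = r['instr'] / r['cycles']
        freq = r['cycles'] / r['time_s']
        result[cores] = {
            # More total instructions than the baseline (e.g. replicated
            # work) drives this ratio below 1.0.
            'instruction_scaling': base['instr'] / r['instr'],
            # Lower IPC (e.g. worse cache behaviour at scale) -> below 1.0.
            'ipc_scaling': ipc / base_ipc,
            # Lower effective clock (e.g. frequency throttling) -> below 1.0.
            'frequency_scaling': freq / base_freq,
        }
    return result
```

For example, a run whose instruction count is flat but whose IPC halves at higher core counts would show `instruction_scaling` near 1.0 and `ipc_scaling` near 0.5, pointing the analyst at memory behaviour rather than extra work.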

The presentation slides are also available here.

About the Presenter

Jesus Labarta has been a full professor of Computer Architecture at the Technical University of Catalonia (UPC) since 1990.
Since 2005, he has been responsible for the Computer Science Research Department within the Barcelona Supercomputing Center (BSC). His main current lines of work are performance analysis tools, programming models, and resource management. His team distributes the open-source BSC tools (Paraver and Dimemas) and researches increasing the intelligence embedded in performance analysis tools. He is involved in the development of the OmpSs programming model and its implementations for SMP, GPU, and cluster platforms.