Readiness of HPC Extreme-scale Applications

ISC HPC 2024 Workshop

Thursday, May 16, 2024, 2:00 pm - 6:00 pm



In November 2022, after many years of pursuing the exascale goal, the first exascale supercomputer appeared in the Top500 list, and several more are expected within the next year. Now is the time for application software to demonstrate its readiness for extreme-scale computer systems built from large assemblies of heterogeneous CPU processors and GPU accelerators. Europe has been addressing this challenge for the last eight years through its Centres of Excellence (CoEs) for HPC applications, funded to greatly extend the scalability of a large selection of HPC codes and to improve their efficiency and performance. To broaden this discussion to a larger community, this ISC workshop provides a forum to consider common challenges, ideas, solutions, and opportunities from the point of view of HPC application developers preparing for exascale. ISC is the leading HPC conference in Europe, gathering not only the main HPC vendors and providers but also developers and standardisation committees for programming models, compilers, and other system software. However, one of the key players in this ecosystem is still missing: the HPC applications themselves! With this workshop, we seek to close this gap and open the ISC conference to HPC code developers.


  • Marta García-Gasulla
    Researcher and Team Leader, Barcelona Supercomputing Center
  • Brian J. N. Wylie
    Research Scientist, Forschungszentrum Jülich GmbH, Jülich Supercomputing Centre

The full ISC schedule can be found on the conference website.

Preliminary workshop agenda:

14.00 Welcome & introduction to the workshop (García & Wylie)

  • “EuroHPC JU: Supporting the European HPC application ecosystem” (Linda Gesenhuis & Mladen Skelin, EuroHPC JU)

14.45 Presentations

  • “NEKO: A modern, portable, and scalable framework for high-fidelity computational fluid dynamics” (Niclas Jansson, KTH/S)
  • “Performance portable and scalable particulate flow simulations using the waLBerla framework” (Harald Köstler, FAU/D)
  • “GROMACS: meeting exascale portability and performance challenges” (Szilárd Páll, KTH/S)
  • “Deploying your software just once for all EuroHPC supercomputers is EESSI” (Lara Peeters, UGhent/B)
  • “Exascale for mid-scale applications” (Simon Burbidge, DiRAC/UK)

16.00 Break

16.30 Keynote: “The Exascale Computing Project: Outcomes and Lessons Learned” (Lois Curfman McInnes, ANL)

17.00 Panel discussion with European HPC applications CoEs (moderator: Guy Lonsdale, scapos/D)

  • Niclas Jansson (KTH/S) 
  • Erwan Raffin (Eviden/F)
  • Nicola Spallanzani (CNR/I)

17.45 Wrap-up (García & Wylie)

18.00 Adjourn



Title: “The Exascale Computing Project: Outcomes and Lessons Learned”

Lois Curfman McInnes (ANL)

Abstract: The U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP) recently completed its work in developing a capable exascale computing ecosystem comprising applications, software technologies, and deployment and integration capabilities. We discuss the major accomplishments of and lessons learned by the ECP community over the course of seven years in developing an integrated scientific computing software stack (which enables and fosters success on a wide variety and range of scales of computers) and in demonstrating new physics capabilities in a wide variety of scientific applications. We emphasize issues in creating the exascale ecosystem, particularly algorithm design and implementation for accelerator-based compute nodes, performance portability across a range of platforms, fostering strong collaborations across multidisciplinary teams, and managing and measuring the success of a computational science project of this scale.


Title: “NEKO: A modern, portable, and scalable framework for high-fidelity computational fluid dynamics”

Niclas Jansson (KTH/S)

Abstract: With Exascale computing capabilities on the horizon, we have seen a transition to more heterogeneous architectures for Computational Fluid Dynamics (CFD) applications. Traditional homogeneous scalar processing machines are replaced with heterogeneous machines that combine scalar processors with various accelerators, such as GPUs. While offering high theoretical peak performance and high memory bandwidth, complex programming models and significant programming investments are necessary to exploit these systems efficiently. The main goal of the Centre of Excellence for Exascale CFD (CEEC) is to address extreme-scale computing challenges to enable the use of large-scale, accurate and cost-efficient high-fidelity CFD simulations for both academic and industrial applications. We present one of the core codes in CEEC, Neko - a portable framework for high-fidelity spectral element flow simulations. Focusing on Neko’s performance and exascale readiness, we outline the optimisation and algorithmic work necessary to ensure scalability and performance portability across a wide range of platforms.


Title: “Performance portable and scalable particulate flow simulations using the waLBerla framework”

Harald Köstler (FAU/D)

Abstract: In our talk we present the latest numerical results from particulate flow simulations conducted on EuroHPC machines using the multi-physics simulation framework waLBerla. The EuroHPC infrastructure makes it possible to investigate the complex interplay of multiple physical phenomena at unprecedented scales. Our results demonstrate the capability of waLBerla to run on different HPC platforms, where performance portability is achieved through code generation technology. Example applications include material transport in riverbeds and antidunes.


Title: “GROMACS: meeting exascale portability and performance challenges”

Szilárd Páll (KTH/S)

Abstract: GROMACS is a widely used molecular dynamics software package and one of the most commonly deployed scientific applications across HPC centers in Europe and worldwide. It has a strong focus on scalable algorithms, bottom-up performance optimisation, and portability in practice. Long-term investment into adapting its fundamental algorithms and parallelisation to heterogeneous, accelerated systems, as well as into portability, has allowed GROMACS to run efficiently on all exascale architectures. However, harnessing these increasingly diverse systems and exploiting their massive parallelism is not without challenges. This talk will discuss our efforts to use portable programming models without sacrificing performance on new heterogeneous architectures, as well as co-design efforts to develop fine-grained algorithms that improve scalability on upcoming systems.


Title: “Deploying your software just once for all EuroHPC supercomputers is EESSI”

Lara Peeters (UGhent/B)

Abstract: European Centers of Excellence (CoEs) focus on optimizing software for specific research areas. They refine existing code to boost performance, efficiency, and scalability. Once complete, these improvements are shared publicly through EuroHPC centers, benefiting both academic and industrial researchers. However, this last step in deploying scientific software to EuroHPC centers is not straightforward. Modern supercomputers are built using a heterogeneous architecture in processors and accelerators, making the installation step quite tedious, challenging, and in some cases, repetitive if the project aims to target more than one European supercomputer. This talk will introduce the European Ecosystem for Scientific Software Installations (EESSI). Developed by the MultiXscale CoE, EESSI offers a streamlined approach for deploying scientific software across EuroHPC centers.


Title: “Exascale for mid-scale applications”

Simon Burbidge (DiRAC/UK)

Abstract: Investment in exascale has paid off for a good number of grand-challenge and top-tier HPC use cases. How can we make it work for the more common mid-scale applications and runs used in most computational research? There is a myriad of codes across the domains. Is the challenge too big for us, and how much effort might be needed to demonstrate success?



Introduction - Brian Wylie, JSC

NEKO: A modern, portable, and scalable framework for high-fidelity computational fluid dynamics - Niclas Jansson, KTH

GROMACS: meeting exascale portability and performance challenges - Szilárd Páll, KTH

Deploying your software just once for all EuroHPC supercomputers is EESSI - Lara Peeters, UGhent

Exascale for mid-scale applications - Simon Burbidge, DiRAC

Keynote: “The Exascale Computing Project: Outcomes and Lessons Learned” - Lois Curfman McInnes, ANL

ESiWACE3 - Erwan Raffin, Eviden

ChEESE - Erwan Raffin, Eviden

MaX – "Materials design at the eXascale" European Centre of Excellence - Nicola Spallanzani, CNR