ESiWACE stands for Centre of Excellence in Simulation of Weather and Climate in Europe. It is a new initiative for the HPC ecosystem in Europe that brings together two established European partners: the European Network for Earth System modelling (ENES), representing the European climate modelling community, and the world-leading European Centre for Medium-Range Weather Forecasts (ECMWF).

The main goal of ESiWACE is to substantially improve the efficiency and productivity of numerical weather and climate simulations on high-performance computing platforms by supporting the end-to-end workflow of global Earth system modelling in HPC environments.

In addition, by exploiting the opportunities and tackling the challenges of the upcoming exascale era, ESiWACE will establish demonstrator simulations at the highest affordable resolutions (target: 1 km). This will yield insight into the computability of configurations capable of addressing key scientific challenges in weather and climate prediction.

The work plan of ESiWACE is organized in five work packages (WP): two deal with the governance of ESiWACE products and services (WP1) and the coordination of the project itself (WP5), while the remaining three encompass the bulk of the technical and scientific work:

  • WP2 on "Scalability" demonstrates how to build and productively operate global cloud-resolving and eddy-resolving models through more efficient model codes and tools (model couplers, I/O libraries).
  • WP3 on "Usability" aims to considerably improve the ease of use of the available tools and of the computing and data-handling infrastructures.
  • WP4 on "Exploitability" tackles the major roadblocks that hinder the efficient use of the considerable amounts of data produced by such simulations.

The figure illustrates the spirit of creating a centre for the Earth system modelling and weather prediction community. The interaction of the three scientific/technical work packages, which focus on the three ESiWACE themes, is supported at the administrative level by WP5 and steered by WP1 to guarantee that the work is informed by, and responds to, community requirements.

The project is funded by Horizon 2020 under the call H2020-EINFRA-2015-1, "Centres of Excellence for computing applications", of DG CONNECT.

MPI-M participation
Our institute leads the "Usability" work package (WP3). The work covers two branches. The aim of the first is the development of Cylc, a meta-scheduler intended to increase the community's ability to cope with the growing workflow complexity of both climate and weather applications in production and research modes. The corresponding task within the WP is led by the Met Office; the role of MPI-M in this activity is limited to general coordination.

The focus of the second branch, into which MPI-M puts most of the resources it dedicates to the project, is improving the ease of use of the software, computing, and data-handling infrastructure for Earth System Modelling (ESM) scientists. In the first phase of the project, together with the Barcelona Supercomputing Center, the University of Reading, and the German Climate Computing Center, we identified common software packages used within the ESM community and worked out recommendations on how to install and configure them. The results of these activities have been published as two living documents: "Application Software Framework: A White Paper" and "How to select, configure and install ESM software stacks: Handbook for system administrators".

In the course of our research, it became obvious that Earth System Models and their workflows require many pieces of software to be correctly installed; the same applies to numerical weather prediction (NWP) models. Long lists of dependencies have to be resolved and accounted for during the deployment process.

Software dependency tree of a minimalistic workflow that includes ICON and CDO
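
To give a flavour of what this means in practice, the sketch below resolves a single branch of such a tree by hand for CDO: zlib first, then HDF5 against it, then netCDF, then CDO itself. The prefixes, flags, and build order shown are illustrative assumptions rather than a verified recipe, and each step is run in the unpacked source tree of the respective package:

    # Hedged sketch: manually building one branch of CDO's dependency chain.
    # Prefixes and flags are illustrative; run each step in the corresponding
    # source tree.
    PREFIX=$HOME/sw

    # zlib: the innermost dependency
    ./configure --prefix=$PREFIX/zlib && make && make install

    # HDF5: built against the zlib installed above
    ./configure --prefix=$PREFIX/hdf5 --with-zlib=$PREFIX/zlib && make && make install

    # netCDF-C: built against HDF5
    CPPFLAGS=-I$PREFIX/hdf5/include LDFLAGS=-L$PREFIX/hdf5/lib \
        ./configure --prefix=$PREFIX/netcdf && make && make install

    # CDO: finally, pointed at the netCDF installation
    ./configure --prefix=$PREFIX/cdo --with-netcdf=$PREFIX/netcdf && make && make install

Repeating this for dozens of packages, several compilers, and multiple MPI libraries is exactly the burden that motivates the search for automation described below.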

The boundary between the areas of responsibility of users and system administrators is blurred: users are usually in charge of installing the applications they need for their workflows but are not familiar with the system software, while system administrators are not familiar with all aspects of their users' applications. This uncertainty introduces a gap that has to be closed before the software environment can be used productively.

Transition between areas of responsibilities of users and system administrators

These issues are usually addressed by package managers: tools that help users install the software they need without diving into the details of software dependencies, configuration, and compilation. The main problem with these managers is that most of them are designed to work in a particular well-defined and well-tested software environment (usually with a single compiler and with root privileges), which is, unfortunately, not available on most supercomputers. Further research into this class of tools left us with a short list of possible solutions, among which Spack appeared to be the most suitable one.

Spack is an open-source package manager from Lawrence Livermore National Laboratory. The tool was designed for large supercomputing centres: it supports multiple versions and configurations of software stacks on a wide variety of (mostly HPC) platforms and environments. The flexibility and comprehensible structure of the Spack code helped us to adjust it to the needs of our community and to enable automatic installation of most of the packages described in the living documents.
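
As a rough illustration of the resulting workflow, the following minimal sketch shows how a package can be deployed with Spack on a fresh system; cdo serves as an example package from Spack's builtin repository, and the exact syntax and available versions depend on the Spack release:

    # Minimal sketch of an end-to-end Spack session (details vary by release).
    git clone https://github.com/spack/spack.git
    . spack/share/spack/setup-env.sh   # make the spack command available
    spack compiler find                # register compilers already on the system
    spack spec cdo                     # preview the concretized dependency tree
    spack install cdo                  # build cdo together with all dependencies
    spack load cdo                     # put the installed cdo on the PATH

Because every package is installed into its own configuration-specific prefix, several variants of the same package, e.g. built with different compilers, can coexist on one system, which matches the multi-compiler reality of HPC platforms.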

The current results of this work already accelerate the ESM software deployment process significantly, even for inexperienced users. Nonetheless, there is still considerable room for improvement towards stable and flexible solutions; this is our goal for the second phase of the ESiWACE project.

For more details on our work in ESiWACE, please contact Sergey Kosukhin (sergey.kosukhin@mpimet.mpg.de) or Reinhard Budich (reinhard.budich@mpimet.mpg.de).