IT Center Blog

Invitation to All Cluster Users

February 2nd, 2024

Source: Pixabay

Hit the keys and join our HPC events in March 2024!

Are you interested in parallel programming, the new CLAIX-2023 platforms, HPC applications in general or would you like to benefit from personal support on site?

Porting & Tuning Workshop & PPCES 2024 – there is something helpful for every cluster user!

Find out in this article where you can register for free today.


VI-HPS Porting and Tuning Workshop 2024

Date: February 26 – March 1, 2024

Location: IT Center RWTH Aachen, held in parallel at TU Dresden, and online

This free workshop is designed to provide valuable insights into porting applications to the CLAIX-2023 (Sapphire Rapids) platform and improving their performance through tuning. The course includes two main components: porting applications to the new platforms and using performance tools for optimal results.

Detailed Schedule


Your benefit:
The course starts with an introduction to the new platforms CLAIX-2023 at RWTH Aachen University and Barnard at TU Dresden. There you will get to know the new platforms (hardware & software) and learn what you need to pay attention to when porting your applications and workflows to the new systems. During the tuning part, participants on site will receive hands-on help, while online participants will be supported via the chat platform Slack.

In the second part of the workshop, various performance tools from VI-HPS members will be presented, which you can use to tune various aspects of your HPC application. A large part of the workshop will focus on applying these tools on site to your own code, with all participants being supported by the tool experts present.


PPCES 2024

Date: March 11 – March 15, 2024

Location: IT Center RWTH Aachen

The event Parallel Programming in Computational Engineering and Science (PPCES) covers the basics of parallel programming with OpenMP and MPI in Fortran and C/C++ and takes a first step towards performance tuning. In addition, we will include current topics from the field of Machine & Deep Learning. In recent years, the basics of accelerator programming have also been covered. PPCES has a long tradition and has been held in Aachen every year since 2001.

Detailed Schedule


Your benefit:
– OpenMP is a widely used approach for programming shared-memory architectures that is supported by most compilers today. We will cover the basics of the programming paradigm as well as some advanced topics such as programming NUMA machines. The nodes of the RWTH Compute Cluster contain an increasing number of cores, which makes shared-memory programming an important alternative for applications that cannot easily be parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with many cores each.

– The Message Passing Interface (MPI) is the de facto standard for programming large HPC systems. We will introduce the basic concepts and give an overview of some advanced features. We will also cover hybrid parallelization, i.e. the combination of MPI and shared memory programming, which is becoming increasingly popular with the growing number of cores per cluster node.

– Machine and deep learning: We provide a basic introduction to machine and deep learning approaches and to data processing techniques that support dataset preparation and model selection for training and inference. We will cover the basic concepts of supervised and unsupervised learning, such as classification, regression and clustering, to get a feel for which technique is appropriate for a given problem. In addition, we will perform several hands-on exercises using popular frameworks such as scikit-learn and PyTorch on our new HPC infrastructure equipped with powerful Intel Sapphire Rapids CPUs and NVIDIA H100 GPUs. In these exercises, we show how to define models, build datasets and training pipelines, and monitor and visualize training results, e.g. with TensorBoard. For the deep learning exercises, we first train on a single GPU; if that is not sufficient, we also show how to scale the distributed training to multiple GPUs or compute nodes.


Convinced? Then secure one of the coveted places today!

Not quite convinced yet? The detailed schedules of both events will help you gain a deeper insight into the topics covered. If you have any questions, don’t hesitate to contact our HPC team. They will be happy to help and advise you.

We look forward to seeing you!

Responsible for the content of this article is Dunja Gath.
