- This event has already taken place.
PPCES 2022
Monday, March 21, 2022, 9:00 – Friday, March 25, 2022, 16:00
About PPCES
This one-week online event continues the tradition of the annual week-long events that have taken place in Aachen every spring since 2001. We will cover the basics of parallel programming with OpenMP and MPI in Fortran and C/C++, a first step towards performance tuning, as well as current topics in AI/machine learning. Hands-on exercises are included for each topic.
The contents of the courses are generally applicable but will be specialized towards CLAIX, the compute cluster currently installed at the IT Center of RWTH Aachen University. It might be helpful to read through the information provided during the HPC introduction on March 12 this year, especially if you want to actively use CLAIX after this event.
OpenMP is a widely used approach for programming shared-memory architectures and is supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics, such as programming NUMA machines. The nodes of the RWTH Compute Cluster contain an increasing number of cores, so we consider shared-memory programming a vital alternative for applications that cannot easily be parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP on clusters whose nodes offer ever more cores.
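For readers new to the model, the following minimal sketch (illustrative only, not taken from the course material) shows the kind of construct the first sessions start from: a parallel region with a worksharing loop and a reduction.

// Minimal OpenMP sketch: parallel region, worksharing loop, reduction.
// Compile with, e.g., "gcc -fopenmp" or "g++ -fopenmp".
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    #pragma omp parallel
    {
        // Every thread in the team prints its id once.
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());

        // The loop iterations are distributed across the team; the
        // reduction clause combines the per-thread partial sums.
        #pragma omp for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 0.5 * i;
    }

    printf("sum = %f\n", sum);
    return 0;
}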
The Message Passing Interface (MPI) is the de-facto standard for programming large HPC systems. We will introduce the basic concepts and give an overview of some advanced features. Also covered is hybrid parallelization, i.e., the combination of MPI and shared memory programming, which is gaining popularity as the number of cores per cluster node grows.
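As a small taste of what the MPI sessions build on, here is a minimal sketch (illustrative only, not taken from the course material) of initialization, rank/size queries and one blocking point-to-point message.

// Minimal MPI sketch: init, rank/size query, blocking send/receive.
// Compile with "mpicc", run with, e.g., "mpirun -np 2 ./a.out".
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0 && size > 1) {
        int payload = 42;
        // Blocking send of one integer to rank 1 with message tag 0.
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        // Blocking receive of the matching message from rank 0.
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", payload);
    }

    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}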
Machine Learning: We provide an overview of end-to-end deep learning with the latest version of Tensorflow/Keras. It covers the basic concepts of defining models with Keras and data pipelines with Tensorflow’s “Dataset”, and of visualizing the results with Tensorboard during training. If training on one node or GPU is not enough, we show how to scale training up/out to multiple compute nodes and GPUs with Horovod. Furthermore, we provide an introduction to scikit-learn, with an overview of the different machine learning algorithms it provides and of how to utilize it on GPUs with H2O4GPU. The training courses include hands-on exercises to be run directly on RWTH infrastructure.
Guest Speakers
We are very happy to present two guest speakers:
- Georg Zitzlsberger (IT4Innovations National Supercomputing Center at VSB – Technical University of Ostrava) on Machine Learning
- Ruud van der Pas (Oracle Linux Engineering) on OpenMP
Agenda
Day 1+2: OpenMP
Monday, March 21 | Day 1: OpenMP Part I | |
---|---|---|
09:00 – 09:10 | Organization PPCES2022 | Daniel Schürhoff |
09:10 – 10:30 | OpenMP Basics part 1 | Christian Terboven |
11:00 – 12:00 | OpenMP Basics part 2 (incl. Lab) | Christian Terboven |
14:00 – 15:30 | OpenMP Basics part 3 (incl. Lab) | Christian Terboven |
16:00 – 17:00 | OpenMP Basics part 4 (incl. Lab) | Christian Terboven |
Tuesday, March 22 | Day 2: OpenMP Part II | |
---|---|---|
09:00 – 10:30 | Speed Up Your OpenMP Application Without Doing Much | Ruud van der Pas |
11:00 – 12:00 | OpenMP SIMD | Tim Cramer |
14:00 – 15:30 | OpenMP Advanced Tasking (incl. Lab) | Jannis Klinkenberg |
16:00 – 17:00 | OpenMP for Accelerators | Jannis Klinkenberg |
Day 3+4: MPI
Wednesday, March 23 | Day 3: MPI Part I | |
---|---|---|
09:00 – 10:30 | Introduction to MPI | Marc-Andre Hermanns |
11:00 – 12:00 | Blocking Point-to-Point Communication I | Marc-Andre Hermanns |
14:00 – 15:30 | Blocking Point-to-Point Communication II | Marc-Andre Hermanns |
16:00 – 17:00 | Non-blocking Point-to-Point Communication | Marc-Andre Hermanns |
Thursday, March 24 | Day 4: MPI Part II | |
---|---|---|
09:00 – 10:30 | Blocking Collective Communication | Marc-Andre Hermanns |
11:00 – 12:00 | Communicator Basics | Marc-Andre Hermanns |
14:00 – 15:30 | Hybrid Programming | Marc-Andre Hermanns |
16:00 – 17:00 | Outlook on Advanced Topics & Wrap-Up | Marc-Andre Hermanns |
Day 5: Machine Learning
Seminar times will be 9:00-12:00 and 13:00-16:00.
This event is supported by the EuroCC project (see below).
Friday, March 25 | Day 5: Machine Learning | |
---|---|---|
09:00 – 09:45 | Introduction to scikit-learn | Georg Zitzlsberger |
09:45 – 10:00 | Getting Started on the Cluster | Jannis Klinkenberg |
10:15 – 11:00 | Hands-on scikit-learn examples | Georg Zitzlsberger |
11:00 – 12:00 | Introduction to Deep Neural Networks | Georg Zitzlsberger |
13:00 – 14:00 | Tensorflow/Keras Exercises (short intro + hands-on exercise) | Georg Zitzlsberger |
14:15 – 15:30 | Multi-GPU with Horovod (incl. short Hands-on) | Georg Zitzlsberger |
15:30 – 16:00 | Q & A | Georg Zitzlsberger |
Prerequisites
Attendees of parts I and II should be comfortable with C/C++ or Fortran programming in a Linux environment and interested in learning more about the technical details of application tuning and parallelization.
Participants of part III – machine learning – will need some basic knowledge of Python.
All presentations will be given in English.
This event will be an online presentation.
All parts of the tutorials will be accompanied by exercises.
Please register for the event here: https://www.itc.rwth-aachen.de/go/id/sxqf/file/9-4842/?lidx=1
Participants who have access to the RWTH identity management can use their own HPC account.
Members of RWTH who do not yet have such an account can create one via the self-service portal (http://www.rwth-aachen.de/selfservice) (choose: Accounts und Kennwörter – Account anlegen – Hochleistungsrechnen).
External participants must provide their own Linux environment containing an OpenMP-capable compiler and an MPI library, or a Singularity environment, respectively.
For parts I and II a Linux virtual machine will be sufficient.
For example, on Ubuntu 20.04 LTS the following commands can be used to install the necessary software for OpenMP and MPI:
# g++ is available by default
sudo apt install gfortran        # install Fortran compiler - if necessary
sudo apt-get install libomp-dev  # install OpenMP libraries
sudo apt install mpich           # install MPI library
Simple program examples can be compiled and executed as follows:
g++ -fopenmp openmp-hello-world.cc; ./a.out
gfortran -fopenmp openmp-hello-world.f90; ./a.out
mpicc mpi_hello_world.c -o ./a.out; mpirun -np 2 ./a.out
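As a quick check that the compiler's OpenMP support and the MPI library work together (and as a small preview of the hybrid programming session on day 4), a minimal sketch like the following can be used. It is not part of the official lab material, and the file name hybrid_hello.c is just a placeholder.

// hybrid_hello.c - minimal MPI+OpenMP sketch to verify the installation.
// Build and run, e.g.:
//   mpicc -fopenmp hybrid_hello.c -o hybrid_hello
//   OMP_NUM_THREADS=2 mpirun -np 2 ./hybrid_hello
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size;

    // Request a thread support level suitable for OpenMP threads
    // where only the master thread makes MPI calls.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        // Each OpenMP thread of each MPI rank reports itself once.
        printf("Rank %d of %d, thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}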
For part III (ML), participants need to be able to run Singularity containers with access to one or more NVIDIA GPUs.
Course Material:
OpenMP
00-openmp-CT-welcome
01-openmp-CT-overview
02-openmp-CT-parallel_region
03-openmp-CT-worksharing
04-openmp-CT-scoping
05-openmp-CT-compilers_exercises
06-openmp-CT-welcome
07-openmp-CT-tasking_motivation
08-openmp-CT-tasking_model
09-openmp-CT-taskloop
10-openmp-CT-tasking_cutoff
11-openmp-CT-tasking_dependencies
12-openmp-CT-welcome
13-openmp-CT-NUMA
14-openmp-CT-tasking_affinity
15-openmp-CT-hybrid
16-openmp-CT-SIMD
17-openmp-CT-offloading
Exercises:
MPI
hpc.nrw-01-MPI-Overview
hpc.nrw-02-MPI_Concepts
hpc.nrw-03-Blocking_Point-to-Point_Communication
hpc.nrw-04-Non-blocking-Point-to-Point_Communication
hpc.nrw-05-Derived_Datatypes
hpc.nrw-06-Blocking_Collective_Communication
hpc.nrw-07-Communicator_Handling
hpc.nrw-08-Hybrid_Programming
hpc.nrw-09-Advanced_Topics
Exercises
ppces2022-MPI-labs-fortran.tar.gz
ppces2022-MPI-labs-C.tar.gz
ML
0_Agenda
1_Introduction_to_scikit-learn
2_scikit-learn_Optimization
4_Hands-On
5_Introduction_Deep_Neural_Networks
6_Tensorflow_Keras
7_Multi-GPU_Horovod
Exercises:
2022-ppces-ML-DL-instructions
2022-ppces-excercises-ml-dl.tar.gz
Registration
The registration is now closed, but was available here: https://www.itc.rwth-aachen.de/go/id/sxqf/file/9-4842/?lidx=1
Acknowledgements
This work was supported by the EuroCC project. This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 951732. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Germany, Bulgaria, Austria, Croatia, Cyprus, the Czech Republic, Denmark, Estonia, Finland, Greece, Hungary, Ireland, Italy, Lithuania, Latvia, Poland, Portugal, Romania, Slovenia, Spain, Sweden, the United Kingdom, France, the Netherlands, Belgium, Luxembourg, Slovakia, Norway, Switzerland, Turkey, Republic of North Macedonia, Iceland, Montenegro. This project has received funding from the Ministry of Education, Youth and Sports of the Czech Republic (ID:MC2101).