PPCES 2018
Monday, 12. March 2018 - Friday, 16. March 2018
Parallel Programming in Computational Engineering and Science 2018
HPC Seminar and Workshop
March 12–16, 2018
IT Center, RWTH Aachen University
Kopernikusstraße 6, Seminar Room 3 + 4
About PPCES
This event continued the tradition of the annual week-long PPCES events that have taken place in Aachen every spring since 2001.
Throughout the week we covered parallel programming with MPI and OpenMP in Fortran and C/C++, as well as performance tuning. In addition, we introduced the participants to GPGPU programming with OpenACC. Hands-on exercises were provided for each topic, and participants were also free to work on their own codes.
The topics were presented in a modular way, so that participants could pick specific topics and register only for the corresponding days, investing their time as efficiently as possible.
Guest lectures by Ruud van der Pas (Oracle), Jan Eitzinger (RRZE), and Thomas Röhl (RRZE) completed the program.
Agenda and Course Materials
Message Passing with MPI – Day I
Message Passing with MPI – Day II
- 02_PPCES2018_MPI_Tutorial.pdf
- 03_PPCES2018_MPI_Tutorial.pdf
- PPCES2018_Correctness_Tools.pdf
- Performance_101_Score-P_Scalasca.pdf
- PPCES2018_MPI_Vampir.pdf
- More details on performance tools at the website of the last VI-HPS workshop in Aachen
- BTMZ_CLAIX.pdf
Shared Memory Programming with OpenMP – Day I
- OpenMP Threading Tools
- 01_IntroductionToOpenMP.pdf
- 02_OpenMPTaskingInDepth.pdf
- 03_OpenMPNumaSimd.pdf
- 04_OpenMPSummary.pdf
- Lab:
- Getting OpenMP up to Speed
- POP-wokflow.pdf
- ProPE: Node Level Performance Engineering and Performance Patterns
- Performance Analysis with LIKWID
GPGPU Programming with OpenACC
Participants
Attendees should be comfortable with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.
Registration – closed
I. + II. The Message Passing Interface (MPI) is the de facto standard for programming large HPC systems. We will introduce the basic concepts and give an overview of some advanced features. We will also cover hybrid parallelization, i.e. the combination of MPI and shared-memory programming, which is gaining popularity as the number of cores per cluster node grows. Furthermore, we will introduce a selection of performance and correctness-checking tools (Score-P, Vampir, MUST).
III. + IV. OpenMP is a widely used approach for programming shared memory architectures, supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics such as programming NUMA machines. We will also cover a selection of performance and verification tools for OpenMP. The RWTH Compute Cluster contains a number of big SMP machines (up to 144 cores and 2 TB of main memory) as we consider shared memory programming a vital alternative for applications that cannot be easily parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores.
V. OpenACC is a directive-based programming model for accelerators, which enables delegating the responsibility for low-level (e.g. CUDA or OpenCL) programming tasks to the compiler. Using the OpenACC industry standard, the programmer can offload compute-intensive loops to an attached accelerator with little effort. We will give an overview of OpenACC while focusing on NVIDIA GPUs. We will cover topics such as the GPU architecture, offloading loops, managing data movement between hosts and devices, using managed memory, tuning data movement, hiding latencies, and writing heterogeneous applications (CPU + GPU). Finally, we will also compare OpenACC to OpenMP device constructs. Hands-on sessions will be held on RWTH's CLAIX cluster with NVIDIA Pascal GPUs using PGI's OpenACC implementation.
Travel Information
Please make your own hotel reservation. You may find a list of hotels in Aachen on the web pages of the Aachen Tourist Service. We recommend trying to book a room at the Novotel Aachen City, Mercure am Graben, or Aachen Best Western Regence hotels. These are pleasant hotels at reasonable prices within walking distance (20–30 minutes, see city map) of the IT Center through the old city of Aachen. An alternative is the IBIS Aachen Marschiertor hotel, located close to the main station, which is convenient if you are traveling by train and also prefer to commute to the IT Center by train (4 trains per hour, 2 stops).
Most trains between Aachen and Düsseldorf stop at Aachen West station, a 10-minute walk from the IT Center.
From the bus stop and the train station just walk up Seffenter Weg. The first building on the left side at the junction with Kopernikusstraße is the IT Center of RWTH Aachen University. The event will take place in the extension building located at Kopernikusstraße 6.
Contact
Paul Kapinos
Tel.: +49 (241) 80-24915
Fax/UMS: +49 (241) 80-624915
E-mail: hpcevent@itc.rwth-aachen.de