
PPCES 2013

Monday, March 18, 2013 – Friday, March 22, 2013

Parallel Programming in Computational Engineering and Science 2013

HPC Seminar and Workshop

Monday, March 18 – Friday, March 22, 2013

Extension Building of the
Center for Computing and Communication, RWTH Aachen University

Kopernikusstraße 6, seminar rooms 3 + 4


Introduction

This event will continue the tradition of previous annual week-long events taking place in Aachen every spring since 2001.

Throughout the week, we will cover serial (Monday) and parallel programming using MPI (Tuesday) and OpenMP (Wednesday) in Fortran and C/C++ as well as performance tuning. Furthermore, we will introduce the participants to GPGPU programming with OpenACC (Thursday) and provide a brief introduction to the new Intel Many Integrated Core Architecture (Friday), along with opportunities for hands-on exercises including a "bring-your-own-code" session.

These topics are presented in a modular way, so that you can pick and register for individual days and invest your time as efficiently as possible.

Agenda and Registration

The preliminary agenda can be found here.

Part I: Introduction, Parallel Computing Architectures, Serial Tuning – Monday, March 18

After an introduction to the principles of today’s parallel computing architectures, the configuration of the new components of the RWTH Compute Cluster, delivered by the company Bull, will be explained. As good serial performance is the basis for good parallel performance, we cover serial tuning before introducing parallelization paradigms.
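
As a small illustration of what is meant by serial tuning (a sketch for this announcement only, not part of the course material), consider how the traversal order of a two-dimensional array affects performance in C:

/* Illustrative serial-tuning example: C stores two-dimensional arrays
 * row by row, so letting the inner loop run over the second index
 * keeps memory accesses contiguous and cache-friendly. */
#include <stdio.h>

#define N 1024

static double a[N][N];

int main(void)
{
    int i, j;
    double sum = 0.0;

    /* Cache-friendly traversal: the inner loop walks contiguous memory. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];

    /* Interchanging the loops (j outer, i inner) accesses memory with a
     * stride of N doubles and is typically much slower. */
    printf("sum = %f\n", sum);
    return 0;
}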

Part II: Message Passing with MPI – Tuesday, March 19

The Message Passing Interface (MPI) is the de-facto standard for programming large HPC clusters. We will introduce the basic concepts and give an overview of some advanced features. Furthermore, we will introduce the TotalView debugger and a selection of performance tools. We will also cover hybrid parallelization, i.e. the combination of MPI and shared memory programming, which is gaining popularity as the number of cores per cluster node grows.
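
To give a first impression of hybrid parallelization (an illustrative sketch, not taken from the course material; the compile command is only an example), a minimal MPI + OpenMP program in C might look like this:

/* Minimal hybrid MPI + OpenMP sketch.
 * Compile e.g. with: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request thread support so OpenMP threads may coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        /* Typically one MPI process per node, several OpenMP threads per process. */
        printf("rank %d of %d, thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}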

Part III: Shared Memory Programming with OpenMP – Wednesday, March 20

OpenMP is a widely used approach for programming shared memory architectures and is supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics, such as programming NUMA machines or clusters coherently coupled with the vSMP software from ScaleMP. We will also cover a selection of performance and verification tools for OpenMP. The RWTH Compute Cluster comprises a large number of big SMP machines (up to 128 cores and 2 TB of main memory), as we consider shared memory programming a vital alternative for applications which cannot easily be parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores.
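
As a minimal illustration of the OpenMP work-sharing idea (a sketch for this announcement, not course material), the following C loop is distributed among threads with a single directive:

/* Minimal OpenMP example: a saxpy-style loop parallelized with one pragma.
 * Compile e.g. with: gcc -fopenmp saxpy.c -o saxpy */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;
    int i;

    for (i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* The work-sharing directive splits the loop iterations among threads. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f (using up to %d threads)\n", y[0], omp_get_max_threads());
    return 0;
}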


There was a sponsored social dinner on Wednesday, March 20 at 19:00 at Restaurant Palladion in Aachen.

Part IV: GPGPU Programming with OpenACC – Thursday, March 21

OpenACC is a directive-based programming model for accelerators which enables delegating the responsibility for low-level programming tasks (e.g. in CUDA or OpenCL) to the compiler. Using the OpenACC API, the programmer can offload compute-intensive loops to an attached accelerator with little effort. The open industry standard OpenACC was introduced in November 2011 and supports accelerating regions of code in standard C, C++ and Fortran. It provides portability across operating systems, host CPUs and accelerators.

During this workshop day, we will give an overview of OpenACC with a focus on NVIDIA GPUs. We will introduce the GPU architecture and briefly explain what a typical CUDA program looks like. Then, we will dive into OpenACC and learn how it can be used to accelerate code regions. We will cover topics such as offloading loops, managing data movement between host and device, tuning data movement and accesses, applying loop schedules, using multiple GPUs, and interoperating with CUDA libraries. At the end, we will give an outlook on the OpenMP 4.0 standard, which may include support for accelerators. Hands-on sessions take place on the RWTH Aachen GPU (Fermi) Cluster using PGI’s OpenACC implementation.
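
To give a rough idea of the directive-based style (an illustrative sketch, not course material; the compiler invocation is only an example), the following C code offloads a vector addition with OpenACC and describes the data movement explicitly:

/* Illustrative OpenACC example: offload a loop to an accelerator.
 * Compile e.g. with the PGI compiler: pgcc -acc vecadd.c -o vecadd */
#include <stdio.h>

#define N 1000000

static float a[N], b[N], c[N];

int main(void)
{
    int i;

    for (i = 0; i < N; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    /* copyin: transfer the inputs to the device before the loop;
     * copyout: bring the result back to the host afterwards. */
    #pragma acc parallel loop copyin(a, b) copyout(c)
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    return 0;
}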

Part V: Programming the Intel® Xeon Phi™ Coprocessor, Tune Your Own Code – Friday, March 22

Accelerators, like GPUs, are one way to meet the ever-growing demand for compute power. However, they often require a laborious rewrite of the application using special programming paradigms like CUDA or OpenCL. The Intel Xeon Phi coprocessor is based on the Intel Many Integrated Core Architecture and can be programmed with standard techniques like OpenMP, POSIX threads, or MPI. We will give a brief introduction to this new architecture and demonstrate the different programming possibilities. During the labs you can also continue working on the lab exercises or get started with porting or tuning your own codes (not only for the Xeon Phi). Don't set your expectations too high, though, as the time for working on large codes is limited. The better prepared you are, the more you will profit from this opportunity.
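
As a sketch of one of these programming possibilities (illustrative only, not course material; the offload pragma requires the Intel compiler and is otherwise ignored), standard OpenMP code can be pushed to the coprocessor with Intel's offload directives:

/* Illustrative Xeon Phi offload example: the annotated loop and its data
 * are shipped to the coprocessor, where the OpenMP directive uses the
 * many cores of the card. Compile e.g. with: icc -openmp offload.c */
#include <stdio.h>

#define N 100000

int main(void)
{
    float x[N], y[N];
    int i;

    for (i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* The in/inout clauses describe which arrays are copied to and from
     * the coprocessor. */
    #pragma offload target(mic) in(x) inout(y)
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        y[i] += 2.0f * x[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}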

 

Speakers

 

IMPORTANT (for RWTH members only)!

Please note that all RWTH members (except UKA) need a PC pool account (which is not the same as the cluster account) to take part in the hands-on sessions.

Please find related information here.

Participants

Attendees should be comfortable with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.

 

Course Materials

(You can find last year’s course materials here or here.)

Material

  1.  Poster
  2.  Flyer

 

Costs

There is no seminar fee. All other costs (e.g. travel, hotel, and meals) are at your own expense.

Location

RWTH Aachen University
Center for Computing and Communication, Extension Building
Kopernikusstraße 6, 52074 Aachen
Seminar Room 3 + 4

Travel Information

Please make your own hotel reservation. You can find some popular hotels listed here. A complete list of hotels is available on the web pages of the Aachen Tourist Service. We recommend that you try to book a room at the "Novotel Aachen City", "Mercure am Graben" or "Aachen Best Western Regence" hotels. These are nice hotels with reasonable prices within walking distance (20-30 minutes) of the computer center through the old city of Aachen. An alternative is the "IBIS Aachen Marschiertor" hotel close to the main station, which is convenient if you are traveling by train and also want to commute to the Center by train (4 trains per hour, 2 stops).

Please download a sketch of the city (pdf, 415 KB) with some points of interest marked.
You may find a description of how to reach us by plane, train or car here.
Bus routes 33 and 73 connect the city (central bus station) and the stop "Mies-van-der-Rohe-Straße" 6 times per hour.
Most trains between Aachen and Düsseldorf stop at "Aachen West" station, which is a five-minute walk from the center.
From the bus stop and the train station just walk uphill along "Seffenter Weg". The first building on the left-hand side at the junction with "Kopernikusstraße" is the Computing Center. The event will take place in the extension building on "Kopernikusstraße".
The weather in Aachen is usually unpredictable, so it is always a good idea to carry an umbrella. If you bring one, it might be sunny!

Contact

Tim Cramer
Tel.: +49 241 80 24924
Fax/UMS: + 49 241 80 624924
E-mail: hpcevent@rz.rwth-aachen.de

Details

Start:
Monday, March 18, 2013
End:
Friday, March 22, 2013