BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IT Center Events - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IT Center Events
X-ORIGINAL-URL:https://blog.rwth-aachen.de/itc-events
X-WR-CALDESC:Events for IT Center Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20160327T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20161030T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20170326T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20171029T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20180325T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20181028T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20170320
DTEND;VALUE=DATE:20170325
DTSTAMP:20260501T010212Z
CREATED:20210515T171817Z
LAST-MODIFIED:20210515T171817Z
UID:4233-1489968000-1490399999@blog.rwth-aachen.de
SUMMARY:PPCES 2017
DESCRIPTION:Parallel Programming in Computational Engineering and Science 2017\n\nkindly sponsored by\n\nHPC Seminar and Workshop\nMarch 20–24\, 2017\nIT Center\, RWTH Aachen University\nKopernikusstraße 6\nSeminar Room 3 + 4\n\nPlease find information about last year’s PPCES 2016 event here >>>\n\nFeedback\nWe look forward to your feedback to improve next year’s PPCES here >>>\n\nAbout PPCES\nThis event continues the tradition of the annual week-long events that have taken place in Aachen every spring since 2001.\nThroughout the week we will cover parallel programming using OpenMP and MPI in Fortran and C/C++\, as well as performance tuning. Furthermore\, we will introduce the participants to GPGPU programming with OpenACC. Hands-on exercises will be provided for each topic\, which should not discourage you from working on your own code.\nThe topics will be presented in a modular way\, so that you can pick specific ones and register only for the corresponding days\, letting you invest your time as efficiently as possible. Please register separately for each event day.\nGuest lectures by Ruud van der Pas (Oracle)\, Thomas Röhl (RRZE) and Bernd Dammann (TUD)\, together with case studies\, complete the program. 
\nAgenda and Course Materials\n\nPlease find the Agenda here >>>\n\nShared Memory Programming with OpenMP – Day I\n\n01_IntroductionToOpenMP.pdf\n02_OpenMPTaskingInDepth.pdf\n03_OpenMPNumaSimd.pdf\n04_OpenMPSummary.pdf\nExercises_OMP.pdf\, source code: openmp_ex_ppces_2017.tar.bz2\n\nShared Memory Programming with OpenMP – Day II\n\nIntel Threading Tools\nPerformance Analysis with LIKWID\nOpenMP 4.x: One kernel for CPU\, Xeon Phi and GPU\n\nMessage Passing with MPI – Day I\n\n01_PPCES2017_MPI_Tutorial.pdf\n02_PPCES2017_MPI_Tutorial.pdf\nPPCES2017_MPI_Lab.pdf\, source code: PPCES2017_MPI_Lab.tar.bz2\nDebugging with TotalView\n\nMessage Passing with MPI – Day II\n\n03_PPCES2017_MPI_Tutorial.pdf\nPPCES2017_MPI_Performance_tools.pdf\nPPCES2017_Correctness_Tools.pdf\n\nGPGPU Programming with OpenACC\n\nPPCES2017_OpenACC.pdf\nPPCES2017_OpenACC_ProgrammingLab.pdf\nPPCES2017_OpenACC-Lab.tar.gz\n\nParticipants\nAttendees should be comfortable with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.\nRegistration is open!\nI. + II. OpenMP is a widely used approach for programming shared memory architectures\, supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics\, such as programming NUMA machines. We will also cover a selection of performance and verification tools for OpenMP. The RWTH Compute Cluster contains a number of big SMP machines (up to 144 cores and 2 TB of main memory)\, as we consider shared memory programming a vital alternative for applications that cannot easily be parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores. 
\nFurthermore\, we will introduce the participants to modern features of the OpenMP 4 standard\, such as vectorization and programming for accelerators and for the Many Integrated Core (MIC) architecture.\n\nMarch 20 – Part I Shared Memory Programming with OpenMP Day I\nMarch 21 – Part II Shared Memory Programming with OpenMP Day II\n\nIII. + IV. The Message Passing Interface (MPI) is the de-facto standard for programming large HPC systems. We will introduce the basic concepts and give an overview of some advanced features. Also covered is hybrid parallelization\, i.e. the combination of MPI and shared memory programming\, which is gaining popularity as the number of cores per cluster node grows. Furthermore\, we will introduce the TotalView debugger and a selection of performance and correctness-checking tools (Score-P\, Vampir\, MUST).\n\nMarch 22 – Part III Message Passing with MPI Day I\nMarch 23 – Part IV Message Passing with MPI Day II\n\nV. OpenACC is a directive-based programming model for accelerators\, which enables delegating the responsibility for low-level (e.g. CUDA or OpenCL) programming tasks to the compiler. Using the OpenACC industry standard\, the programmer can offload compute-intensive loops to an attached accelerator with little effort. We will give an overview of OpenACC while focusing on NVIDIA GPUs. We will cover topics such as the GPU architecture\, offloading loops\, managing data movement between hosts and devices\, using managed memory\, tuning data movement\, hiding latencies\, and writing heterogeneous applications (CPU + GPU). Finally\, we will also compare OpenACC to OpenMP device constructs. Hands-on sessions are done on the RWTH Aachen GPU (Fermi) cluster using PGI’s OpenACC implementation.\n\nMarch 24 – Part V GPGPU Programming with OpenACC\nRegister here >>>\n\nTravel Information\nPlease make your own hotel reservation. 
You may find a list of hotels in Aachen on the web pages of the Aachen Tourist Service. We recommend that you try to book a room at the Novotel Aachen City\, Mercure am Graben or Aachen Best Western Regence hotels. These are nice hotels with reasonable prices\, within walking distance (20–30 minutes\, see city map) of the IT Center through the old city of Aachen. An alternative is the IBIS Aachen Marschiertor hotel\, located close to the main station\, which is convenient if you are traveling by train and prefer to commute to the IT Center by train as well (4 trains per hour\, 2 stops).\nPlease download a sketch of the city (pdf\, 415 KB) with some points of interest marked.\nYou may find a description of how to reach us by plane\, train or car here.\nBus lines 33 and 73 connect the city (central bus station) and the Mies-van-der-Rohe-Straße bus stop 6 times per hour.\nMost trains between Aachen and Düsseldorf stop at the Aachen West station\, which is a 10-minute walk from the IT Center.\nFrom the bus stop and the train station\, just walk up Seffenter Weg. The first building on the left at the junction with Kopernikusstraße is the IT Center of RWTH Aachen University. The event will take place in the extension building at Kopernikusstraße 6.\nThe weather in Aachen is usually unpredictable\, so it is always a good idea to carry an umbrella. If you bring one\, it might be sunny.\n\nContact\nPaul Kapinos\nTel.: +49 (241) 80-24915\nFax/UMS: +49 (241) 80-624915\nE-mail: hpcevent@itc.rwth-aachen.de
URL:https://blog.rwth-aachen.de/itc-events/event/ppces-2017/
CATEGORIES:HPC Events,PPCES
END:VEVENT
END:VCALENDAR