BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IT Center Events - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IT Center Events
X-ORIGINAL-URL:https://blog.rwth-aachen.de/itc-events/en
X-WR-CALDESC:Events for IT Center Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20210322
DTEND;VALUE=DATE:20210327
DTSTAMP:20260504T074311Z
CREATED:20210204T120017Z
LAST-MODIFIED:20211130T083354Z
UID:2835-1616371200-1616803199@blog.rwth-aachen.de
SUMMARY:PPCES 2021
DESCRIPTION:About PPCES\nThis one-week online event continues the tradition of the annual week-long events that have taken place in Aachen every spring since 2001. We will cover the basics of parallel programming using OpenMP and MPI in Fortran and C/C++\, a first step towards performance tuning\, as well as current topics in AI/machine learning. Hands-on exercises for each topic will be included.\nThe contents of the courses are generally applicable but will be specialized towards CLAIX\, the compute cluster currently installed at the RWTH’s IT Center. It might be helpful to read through the information provided during the HPC introduction on March 12 this year\, especially if you want to actively use CLAIX after this event.\nOpenMP is a widely used approach for programming shared memory architectures and is supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics such as programming NUMA machines. The nodes of the RWTH Compute Cluster contain an increasing number of cores\, so we consider shared memory programming a vital alternative for applications that cannot be easily parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores.\nThe Message Passing Interface (MPI) is the de-facto standard for programming large HPC systems. We will introduce the basic concepts and give an overview of some advanced features. Also covered is hybrid parallelization\, i.e.\, the combination of MPI and shared memory programming\, which is gaining popularity as the number of cores per cluster node grows.\nMachine Learning: We provide an overview of end-to-end deep learning with the latest version of Tensorflow/Keras. 
It covers the basic concepts of defining models with Keras and data pipelines with Tensorflow’s “Dataset”\, and of visualizing the results with Tensorboard while training. If training on one node or GPU is not enough\, we show how to scale distributed training up and out onto multiple compute nodes and GPUs with Horovod. Furthermore\, we provide an introduction to scikit-learn\, with an overview of the different machine learning algorithms it provides and how to utilize it on GPUs with H2O4GPU. The training courses consist of hands-on exercises to be run directly on RWTH infrastructure.\nGuest Speakers\nWe are very happy to present two guest speakers:\n\nRuud van der Pas (Oracle) on OpenMP\nGeorg Zitzlsberger (IT4Innovations National Supercomputing Center at VSB – Technical University of Ostrava) on Machine Learning\n\nAgenda\nDay 1+2: OpenMP\nMonday\, March 22 – Day 1: OpenMP Part I\n09:00 – 10:30  OpenMP Basics part 1 – Christian Terboven\n11:00 – 12:00  OpenMP Basics part 2 (incl. Lab) – Christian Terboven\n14:00 – 15:30  OpenMP Basics part 3 (incl. Lab) – Christian Terboven\n16:00 – 17:00  OpenMP Basics part 4 (incl. Lab) – Christian Terboven\nTuesday\, March 23 – Day 2: OpenMP Part II\n09:00 – 10:30  Getting OpenMP up to speed – Ruud van der Pas\n11:00 – 12:00  OpenMP SIMD – Tim Cramer\n14:00 – 15:30  OpenMP Advanced Tasking (incl. Lab) – Christian Terboven\n16:00 – 17:00  OpenMP for Accelerators – Christian Terboven\nDay 3+4: MPI\nWednesday\, March 24 – Day 3: MPI Part I\n09:00 – 10:30  Introduction to MPI – Marc-Andre Hermanns\n11:00 – 12:00  Blocking Point-to-Point Communication I – Marc-Andre Hermanns\n14:00 – 15:30  Blocking Point-to-Point Communication II – Marc-Andre Hermanns\n16:00 – 17:00  Non-blocking Point-to-Point Communication – Marc-Andre Hermanns\nThursday\, March 25 – Day 4: MPI Part II\n09:00 – 10:30  Blocking Collective Communication – Marc-Andre Hermanns\n11:00 – 12:00  Communicator Basics – Marc-Andre Hermanns\n14:00 – 15:30  Hybrid Programming – Marc-Andre Hermanns\n16:00 – 17:00  Outlook on Advanced Topics & Wrap-Up – Marc-Andre Hermanns\nDay 5: Machine Learning\nSeminar times will be 9:00 – 12:00 and 13:00 – 15:00.\nThis event is partially supported by the [Czech] Ministry of Education\, Youth and Sports from the Large Infrastructures for Research\, Experimental Development and Innovations project “e-Infrastruktura CZ – LM2018140”.\nFriday\, March 26 – Day 5: Machine Learning\n09:00 – 09:45  Introduction to scikit-learn – Georg Zitzlsberger\n09:45 – 10:00  Getting Started on the Cluster – Jannis Klinkenberg\n10:00 – 10:30  Hands-on scikit-learn examples – Georg Zitzlsberger\n11:00 – 12:00  Introduction to Deep Neural Networks – Georg Zitzlsberger\n13:00 – 14:00  Tensorflow/Keras Exercises (short intro + hands-on exercise): define a data pipeline with Dataset\, build a model\, train & visualize with Tensorboard – Georg Zitzlsberger\n14:00 – 14:45  Multi-GPU with Horovod (incl. short hands-on) – Georg Zitzlsberger\n14:45 – 15:00  Q&A – Georg Zitzlsberger\nPrerequisites\nAttendees of parts I and II should be comfortable with C/C++ or Fortran programming in a Linux environment and interested in learning more about the technical details of application tuning and parallelization.\nParticipants of part III – machine learning – will need some basic knowledge of Python.\nAll presentations will be given in English.\nThis event will be an online presentation.\nAll parts of the tutorials will be accompanied by exercises.\nParticipants who have access to the RWTH identity management can use their own HPC account.\nMembers of RWTH who do not yet have such an account can create an HPC account via the self-service portal (https://sso.rwth-aachen.de/idp/profile/SAML2/Redirect/SSO?execution=e1s1) (choose: Accounts und Kennwörter – Account anlegen – Hochleistungsrechnen).\nExternal participants must provide their own Linux environment containing an OpenMP-capable compiler and an MPI library\, or alternatively a Singularity environment.\nFor parts I and II a Linux virtual machine will be sufficient.\nFor example\, on Ubuntu 20.04 LTS the following commands can be used to install the necessary software for OpenMP and MPI:\n# g++ is available by default\nsudo apt install gfortran # install Fortran compiler - if necessary\nsudo apt install libomp-dev # install OpenMP libraries\nsudo apt install mpich # install MPI library\nSimple program examples can be compiled and executed by\ng++ -fopenmp openmp-hello-world.cc; ./a.out\ngfortran -fopenmp openmp-hello-world.f90; ./a.out\nmpicc mpi_hello_world.c -o a.out; mpirun -np 2 ./a.out\nFor part III (ML) participants need to run Singularity containers with access to one or more NVIDIA GPUs. 
\nCourse Material of PPCES 2021\nOpenMP\nPresentations\n\nOrganization_PPCES2021\n00-openmp-CT-welcome\n01-openmp-CT-overview\n02-openmp-CT-parallel_region\n03-openmp-CT-worksharing\n04-openmp-CT-scoping\n05-openmp-CT-compilers\n06-openmp-CT-welcome\n07-openmp-CT-tasking_motivation\n08-openmp-CT-tasking_model\n09-openmp-CT-taskloop\n10-openmp-CT-tasking_cutoff\n11-openmp-CT-tasking_dependencies\n12-openmp-CT-welcome\n13-openmp-CT-NUMA\n14-openmp-CT-tasking_affinity\n15-openmp-CT-hybrid\n16-openmp-CT-SIMD\n17-openmp-CT-offloading\n\nExercises\n\nExercises_OMP_2021\nopenmp_exercises.zip\n\nMPI\nPresentations\n\nhpc.nrw-01-MPI-Overview\nhpc.nrw-02-MPI_Concepts\nhpc.nrw-03-Blocking_Point-to-Point_Communication\nhpc.nrw-04-Non-blocking-Point-to-Point_Communication\nhpc.nrw-05-Derived_Datatypes\nhpc.nrw-06-Blocking_Collective_Communication\nhpc.nrw-08-Hybrid_Programming\nhpc.nrw-09-Communicator_Handling\nhpc.nrw-XX-Advanced_Topics\n\nExercises\n\nppces2021-MPI-labs-C.zip\nppces2021-MPI-labs-Fortran.zip\n\nMachine Learning\nPresentations\n\n00_Agenda\n01a_Introduction_to_scikit-learn\n01b_scikit-learn_Optimizations\n03_scikit-learn_Hands-On\n04_Introduction_Deep_Neural_Networks\n05_Tensorflow_Keras\n06_Multi-GPU_Horovod\n\nExercises\n\n2021-ppces-ML-DL-instructions\n2021-ppces-exercises-ML-DL.zip\n\nFurther Information\nThe OpenMP part is also available as an online tutorial (including videos): https://hpc-wiki.info/hpc/OpenMP_in_Small_Bites
URL:https://blog.rwth-aachen.de/itc-events/en/event/ppces-2021/
LOCATION:IT Center\, Kopernikusstraße 6\, Aachen\, NRW\, 52074\, Deutschland
CATEGORIES:HPC Events,PPCES
ORGANIZER;CN="IT Center":MAILTO:events@itc.rwth-aachen.de
END:VEVENT
END:VCALENDAR