BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IT Center Events - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IT Center Events
X-ORIGINAL-URL:https://blog.rwth-aachen.de/itc-events/en
X-WR-CALDESC:Events for IT Center Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20110327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20111030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20120325T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20121028T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20130331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20131027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20120319
DTEND;VALUE=DATE:20120324
DTSTAMP:20260425T183853Z
CREATED:20210515T172218Z
LAST-MODIFIED:20210515T172218Z
UID:4243-1332115200-1332547199@blog.rwth-aachen.de
SUMMARY:PPCES 2012
DESCRIPTION:Parallel Programming in Computational Engineering and Science 2012\n\nHPC Seminar and Workshop\nMonday\, March 19 – Friday\, March 23\, 2012\nExtension Building of the Center for Computing and Communication\, RWTH Aachen University\nKopernikusstraße 6\, seminar rooms 3+4\n\nKindly supported by Bull.\n\nIntroduction\n\nThis event continues the tradition of the annual week-long events that have taken place in Aachen every spring since 2001.\n\nThroughout the week\, we will cover serial (Monday) and parallel programming using MPI (Tuesday) and OpenMP (Wednesday) in Fortran and C / C++\, as well as performance tuning\, addressing both Linux and Windows platforms. Furthermore\, we will introduce the participants to GPGPU programming (Thursday) and provide ample opportunities for hands-on exercises\, including a “bring-your-own-code” session on Friday.\nThese topics are presented in a modular way\, so that you can pick and register for single days in order to invest your time as efficiently as possible.\nWhile we assume that those who would like to work on Windows during the lab sessions have some basic experience with the MS Visual Studio programming environment\, we will give an additional introduction to this environment on Monday at 11:00 am.\nPlease note that there are two additional introductory courses beforehand on the specifics of the programming environment of the RWTH Compute Cluster for (local) users. They will cover topics such as login procedures and interactive and batch usage of the RWTH Compute Cluster.\n\nA short introduction to the RWTH Compute Cluster programming environment for Windows users on Wednesday\, March 7 at 9 am. 
Registration is closed!\nA short introduction to the RWTH Compute Cluster programming environment for Linux users on Wednesday\, March 14 at 9 am.\n\nAgenda and Registration\nPlease find the agenda here as a PDF.\n\nPart I: Introduction\, Parallel Computing Architectures\, Serial Tuning – Monday\, March 19\nAfter an introduction to the principles of today’s parallel computing architectures\, the configuration of the new components of the RWTH Compute Cluster delivered by the company Bull will be explained. As good serial performance is the basis for good parallel performance\, we cover serial tuning before introducing parallelization paradigms.\n– During your registration\, please indicate in the “remarks” field whether you plan to take part in the introduction to MS Visual Studio on Monday at 11 am.\n– During your registration\, please indicate in the “remarks” field whether you plan to use Linux and/or Windows for your hands-on exercises.\nPart II: Message Passing with MPI – Tuesday\, March 20\nThe Message Passing Interface (MPI) is the de facto standard for programming large HPC clusters. We will introduce the basic concepts and give an overview of some advanced features. Furthermore\, we will introduce the TotalView debugger and a selection of performance tools. We will also cover hybrid parallelization\, i.e. the combination of MPI and shared memory programming\, which is gaining popularity as the number of cores per cluster node grows.\n– During your registration\, please indicate in the “remarks” field whether you plan to use Linux and/or Windows for your hands-on exercises.\nPart III: Shared Memory Programming with OpenMP – Wednesday\, March 21\n\nOpenMP is a widely used approach to programming shared memory architectures and is supported by most compilers nowadays. 
We will cover the basics of the programming paradigm as well as some advanced topics\, such as programming NUMA machines or clusters coherently coupled with the vSMP software from ScaleMP. We will also cover a selection of performance and verification tools for OpenMP. The RWTH Compute Cluster comprises a large number of big SMP machines (up to 128 cores and 2 TB of main memory)\, as we consider shared memory programming a vital alternative for applications which cannot easily be parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores.\n\n– During your registration\, please indicate in the “remarks” field whether you plan to take part in the social event on Wednesday at 7 pm.\n– During your registration\, please indicate in the “remarks” field whether you plan to use Linux and/or Windows for your hands-on exercises.\nPart IV: GPGPU Programming – Thursday\, March 22\nWe will study GPU programming as a parallel programming skill. We will briefly introduce GPU architectures and explain why GPU programming differs from multicore parallel programming. CUDA is NVIDIA’s architecture for executing highly parallel applications on its GPUs; we will introduce NVIDIA’s CUDA C extensions in comparison with the OpenCL standard\, showing basic concepts\, performance measurement and tuning. Then we will look at the PGI Accelerator Model\, a directive-based approach to programming GPUs\, and give a glimpse into the upcoming OpenACC programming paradigm for accelerators.\nPart V: Lab Exercises\, Tune Your Own Code – Friday\, March 23\nAt the end of a week full of presentations and hands-on sessions\, we would like to give you the opportunity to dive into some more practical details\, to continue working on lab exercises\, or to get started with porting or tuning your own codes. Don’t set your expectations too high\, though\, as time for working on large codes is limited. 
You will profit more from this opportunity the better prepared you are.\n– During your registration\, please indicate in the “remarks” field whether you plan to use Linux and/or Windows for your hands-on exercises.\n– During your registration\, please indicate in the “remarks” field whether you plan to bring your own code.\n\nSpeakers\n\nThomas Warschko and Oliver Fortmeier\, Bull\nBernd Dammann\, DTU\, Lyngby\, Denmark\nMembers of the HPC Team of the RWTH Center for Computing and Communication\n\nIMPORTANT (for RWTH members only)!\nPlease note that all RWTH members (except UKA) need a PC pool account (which is not the same as the cluster account) to take part in the hands-on sessions.\nPlease find related information here.\n\nParticipants\nAttendees should be comfortable with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.\n\nCourse Materials\n\nComputer Architecture\n\nPPCES2012-Introduction-DaM.pdf\nenvironment_lab.tar.gz\nenvironment2012.pdf\nParallel_Computer_Architecture_Basics.pdf\nserial_tuning2012.tar.gz\nRWTH-PPCES-2012.pdf\nlsf.pdf\n\nGPGPU\n\n2012_GPUIntro.pdf\n2012_OpenCLIntro.pdf\nGPUDirective-based.pdf\n2012_GPUClusterAtRZ.pdf\n2012_GPUTuning.pdf\ngpgpu-lab_ppces2012.zip\nGPU_quickref.zip\nPPCES12_OpenCL_case_study.pdf\n\nHybrid 
Parallelization\n\n06_DaM_Hybrid_CaseStudy_FLOWer.pdf\n07_DaM_OMP-MPI_CaseStudy_AdaptiveIntegration.pdf\n\nMPI\n\n01_MPI_Tutorial_2012.pdf\n02_MPI_Lab_PPCES_2012.pdf\nmpilab.tar.gz\n03_MSMPI_and_C_CPP_in_VS2010.pdf\n04_MSMPI_and_Fortran_with_VS2008.pdf\nmpilab+solutions.tar.gz\nIntel_MPI_Tools.pdf\nPPCES2012_DaM_ParallelizationStrategies.pdf\nPPCES2012_OF_CaseStudy.pdf\n\nOpenMP\n\nOMP1-Introduction_to_OpenMP.pdf\nOMP2-OpenMP_Performance_Tuning_p1.pdf\nOMP3-OpenMP_on_NUMA.pdf\nExercises_OMP_PPCES2012.pdf\nOMPx-OpenMP_Literature.pdf\nex_omp_ppces.tar.gz\nOpenMP_and_Tools.pdf\nFex_omp_ppces.tar.gz\nScaleMP.pdf\n\nVisual Studio\n\n01_Windows_HPC_Server_Overview.pdf\n02_Windows_HPC_Server_Users_View.pdf\n03_HPC_on_Windows_VisualStudio_ISV.pdf\n04_VS2010_Best_Practices.pdf\nA_Lab.pdf\nlab.zip\n\nSocial Event – sponsored by Bull\nThe social event will take place on Wednesday\, March 21 at 7 pm at Kazan\, Annastraße 26.\n\nCosts\nThere is no seminar fee. All other costs (e.g. travel\, hotel\, and meals) are at your own expense.\n\nLinks\n\nThe previous PPCES 2011 event.\nThe HPC website of the RWTH Center for Computing and Communication\nThe RWTH Compute Cluster User’s Guide\n\nLocation\n\nRWTH Aachen University\nCenter for Computing and Communication\, Extension Building\nKopernikusstraße 6\, 52074 Aachen\nSeminar Rooms 3 + 4\nContact\nSandra Wienke\nTel.: +49 241 80 24761\nFax/UMS: +49 241 80 624761\nE-mail: hpcevent@rz.rwth-aachen.de
URL:https://blog.rwth-aachen.de/itc-events/en/event/ppces-2012/
CATEGORIES:HPC Events,PPCES
END:VEVENT
END:VCALENDAR