Tag: ‘HPC’
NHR4CES Community Workshop 2023
Title: NHR4CES Community Workshop 2023 – Machine Learning in Computational Fluid Dynamics
Event type: Workshop
Date & Time: February 28, 2023, 1.30pm – 5.30pm and March 1, 2023, 9.00am – 1.30pm
Format: Online
Contact: office@nhr.tu-darmstadt.de
Contact persons: Jonas Seng, Dr. Martin Smuda and Ludovico Nista
Description
Big data and Machine Learning (ML) are driving comprehensive economic and social transformations and are rapidly becoming a core technology for scientific computing, with numerous opportunities to advance different research areas such as Computational Fluid Dynamics (CFD).
The combination of CFD with ML has already been applied to several CFD configurations and is a promising research direction with the potential to advance so far unsolved problems, thanks to the ability of deep models to learn in a hierarchical manner with little to no need for prior knowledge.
Moreover, this approach represents a paradigm shift, changing the focus of CFD from time-consuming feature detection to in-depth examination of relevant features and thereby enabling deeper insight into the physics involved in complex natural processes.
The workshop is designed to highlight some of the areas of the highest potential impact, including improving turbulence and combustion closure modeling, developing reduced-order models, and designing versatile neural network architectures. Emerging ML areas that are promising for CFD, as well as some potential limitations, will be discussed.
The workshop aims to bring together different research groups by providing a venue to exchange new ideas, discuss challenges, and expose this new research field to a broader community.
Training: Visualization and Analysis of Atomistic Simulation Data in OVITO
Name: Training: Visualization and Analysis of Atomistic Simulation Data in OVITO
Event type: Workshop
Date: November 15 to 17, 2022
Time: 1pm – 5pm
Format: Online via Zoom
Target Audience: HPC users
Contact Person: Dr. Daniel Utt
Description
Post-processing and analysis of atomistic simulations are essential steps to extract knowledge from the computed trajectories. Many commonly used algorithms are already implemented in OVITO and ready to be used on large datasets containing up to billions of atoms.
During day 1 you will learn how to load your simulation outputs into OVITO and process them in the graphical user interface using the built-in algorithms. We will discuss the most important tools and practice using them in hands-on exercises.
On day 2 we will look at the development of custom processing algorithms using OVITO’s Python extension interface, which lets you solve more complex analysis tasks.
On the last day we will step away from the graphical user interface and take a look at automated workflow scripts. This feature of OVITO lets you generate batch analysis pipelines that may be executed on HPC infrastructure to perform computationally intensive data analysis and visualization tasks in a reproducible way.
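To give a flavor of the kind of per-frame analysis such workflow scripts automate, here is a pure-Python sketch that parses a minimal XYZ-format frame and counts each atom's neighbors within a cutoff. This is illustrative only: OVITO's actual Python interface and file readers are far richer, and are exactly what the course covers.

```python
import math

def parse_xyz(text):
    """Parse a minimal XYZ frame: atom count, comment line, then 'type x y z' lines."""
    lines = text.strip().splitlines()
    n = int(lines[0])
    atoms = []
    for line in lines[2:2 + n]:
        t, x, y, z = line.split()
        atoms.append((t, float(x), float(y), float(z)))
    return atoms

def coordination_numbers(atoms, cutoff):
    """Count neighbors within `cutoff` for each atom (O(N^2), fine for tiny frames)."""
    counts = [0] * len(atoms)
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            if math.dist(atoms[i][1:], atoms[j][1:]) <= cutoff:
                counts[i] += 1
                counts[j] += 1
    return counts

frame = """3
example frame
Cu 0.0 0.0 0.0
Cu 1.0 0.0 0.0
Cu 5.0 0.0 0.0
"""
print(coordination_numbers(parse_xyz(frame), cutoff=1.5))  # → [1, 1, 0]
```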
CFD Training Series: Introduction to Turbulence Modeling and Numerical Implementation
Name: CFD Training Series: Introduction to Turbulence Modeling and Numerical Implementation
Event type: Workshop
Date: November 11, 2022
Time: 1pm – 5pm
Format: Online
Target Audience: HPC users
Contact Person: Xiaoyu Wang
Description
This course introduces the structural properties of various turbulence modeling concepts (RANS, LES, and hybrid RANS/LES), including the associated equations. In addition to the presentation, the corresponding computational setup, including pre-processing, simulation, and post-processing for some illustrative flow configurations, will be provided based on the open-source CFD software OpenFOAM.
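For reference, the starting point of the RANS concept treated in the course is the Reynolds decomposition of the velocity into mean and fluctuating parts; averaging the incompressible Navier–Stokes equations then leaves an unclosed Reynolds-stress term that the turbulence models must approximate:

```latex
u_i = \bar{u}_i + u_i', \qquad
\frac{\partial \bar{u}_i}{\partial t}
+ \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
```

LES follows the same pattern with a spatial filter in place of the average, leaving a subgrid-scale stress to be modeled.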
CFD Training Series: Introduction to Discontinuous Galerkin Methods for Flow Problems
Name: CFD Training Series: Introduction to Discontinuous Galerkin Methods for Flow Problems
Event type: Workshop
Date: October 28, 2022
Time: 1pm – 5pm
Format: Online
Target Audience: HPC users
Contact Person: Dr. Martin Smuda
Description
In this course, we cover the main building blocks to solve fluid flow problems using the Discontinuous Galerkin (DG) method. The course consists of a combination of presentations and hands-on exercises in which a simple DG flow solver is implemented and run on some test cases within our open-source code framework BoSSS.
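As a sketch of the first building block, consider scalar linear advection u_t + a u_x = 0 with a > 0. On each element K, the DG weak form is obtained by multiplying with a test function v from the local polynomial space and integrating by parts; neighboring elements couple only through a numerical flux, here the upwind flux:

```latex
\int_K \frac{\partial u_h}{\partial t}\, v \,dx
\;-\; \int_K a\, u_h\, \frac{\partial v}{\partial x}\, dx
\;+\; \left[\, \hat{f}\, v \,\right]_{\partial K} = 0,
\qquad \hat{f} = a\, u_h^{-}
```

where u_h^- denotes the value taken from the upwind side of the element boundary.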
Training: Introduction to Machine Learning & Deep Learning
Name: Training: Introduction to Machine Learning & Deep Learning
Event type: Workshop
Date: October 19 to 20, 2022
Time: 9am – 1pm (2×4 hours: 3 hours of presentations, 4 hours of hands-on live exercises, and a discussion)
Format: Online
Target Audience: ML/DL beginners, Scientists who want to apply ML methods
Capacity: Unlimited
Requirements (required knowledge/experience): Laptop and a working internet connection
Contact Person: Zahra Sadeghibogar and Jonas Seng
Description
Machine and Deep Learning (ML/DL) are used to solve highly complex problems in a data-driven way not only in industry; the scientific community also has many use cases in which ML/DL are useful, e.g. to discover hidden patterns or to replace computationally heavy simulations with data-driven approaches.
This workshop will introduce the basics of ML and DL in both theory and practice using state-of-the-art technologies. Participants will learn how to design ML models themselves and about possible pitfalls when applying ML in the real world.
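As a taste of the model-design part, here is a minimal, dependency-free sketch of the core training loop behind most ML models: fitting a one-parameter linear model y ≈ w·x by gradient descent on the mean squared error. The course itself uses standard libraries; this only illustrates the principle.

```python
def fit_slope(xs, ys, lr=0.01, steps=500):
    """Fit y = w * x by gradient descent on the mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of (1/n) * sum((w*x - y)^2) with respect to w
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

# Data generated from y = 3x; gradient descent should recover w ≈ 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = fit_slope(xs, ys)
print(round(w, 3))  # → 3.0
```

The same loop, with more parameters and automatic differentiation, is what libraries like scikit-learn and deep learning frameworks run under the hood.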
Agenda
Day 1 – October 19, 2022
9:00 – 9:15
Welcome
9:15 – 9:35
Setup
9:35 – 10:00
Data Preparation (Presentation)
10:00 – 10:30
Data Preparation (Hands On)
10:30 – 10:45
Break
10:45 – 11:15
Random Forests (Presentation)
11:15 – 11:45
Random Forests (Hands On)
11:45 – 12:15
Generalized Linear Models (Presentation)
12:15 – 12:45
Generalized Linear Models (Hands On)
12:45 – 13:00
Wrap Up
Day 2 – October 20, 2022
9:00 – 9:15
Welcome
9:15 – 9:45
Neural Networks (Presentation)
9:45 – 10:15
Neural Networks (Hands On)
10:15 – 10:45
Hyperparameter Tuning (Presentation)
10:45 – 11:15
Hyperparameter Tuning (Hands On)
11:15 – 11:30
Break
11:30 – 12:00
Pitfalls of Neural Networks (Presentation)
12:00 – 12:10
Wrap Up
12:10 – 13:00
Discussion & Feedback
Training: Parallelization in OpenFOAM for HPC Deployment
Name: Training: Parallelization in OpenFOAM for HPC Deployment
Event type: Workshop
Date: October 18, 2022
Time: 10am – 4pm
Format: Online
Target Audience: HPC users
Requirements (required knowledge/experience):
The participants are required to have a working operating system (Linux) and the most recent version of OpenFOAM installed. For more details and instructions, please refer to
- https://develop.openfoam.com/Development/openfoam/-/blob/master/doc/Build.md
- https://develop.openfoam.com/Development/openfoam/-/wikis/precompiled/
- https://www.openfoam.com/documentation/system-requirements
To take full advantage of the course it is advisable to have working knowledge of Linux, as it is used for all exercises. Basic C++ programming skills will be required for some of the exercises, as well as some insight into CFD theory (finite volume method, basic concepts of discretization).
Contact Person: Mohammed Elwardi Fadeli and Dr. Holger Marschall
Description
OpenFOAM is an open-source, mature, and established C++ library for computational continuum mechanics (CCM), including Computational Fluid Dynamics (CFD). To leverage its full potential, it is crucial to use the high-performance computing (HPC) resources on modern distributed-memory parallel computer architectures efficiently. This must be based on a sound understanding of parallelization in OpenFOAM and of the available HPC techniques.
The training introduces the participants to the different concepts of parallelization, with code examples for illustration. Hands-on exercises will further deepen and solidify this knowledge. Participants will also gain an overview of the techniques and dedicated tools involved in running a massively parallel computation with OpenFOAM, as well as of ongoing HPC-related activities in research and development.
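For orientation, parallel OpenFOAM runs start from domain decomposition, configured in `system/decomposeParDict`. A minimal sketch could look like the following (entries and available methods may vary between OpenFOAM versions; the training covers the details):

```
// system/decomposeParDict (minimal example)
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  4;

method  scotch;     // graph-based decomposition, needs no extra coefficients
```

The case is then split with `decomposePar`, run under MPI with e.g. `mpirun -np 4 simpleFoam -parallel`, and the results recombined with `reconstructPar`.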
Training: Process Mining and Scientific Workflows Running on the HPC cluster
Name: Training: Process Mining and Scientific Workflows running on the HPC cluster
Event type: Workshop
Date: October 18, 2022
Time: 9am – 1pm
Format: Online
Target Audience: HPC users
Contact Person: Zahra Sadeghibogar
Description
The goal of process mining is to turn event data into insights and actions. On the other side, there are scientific workflows running on HPC clusters.
How can process mining and scientific workflows on HPC clusters be combined? The first idea is process mining on logs of scientific workflow executions on HPC systems. Mining previous executions of a workflow can be used to deduce parallel execution parameters such as the number of tasks, the number of cores, and the dedicated memory. This provides valuable insights and optimization ideas for running tasks on the HPC cluster.
Analysis of the extracted event log currently shows limited use of SLURM as a workflow management system: only a small fraction of accounts declare interdependencies between tasks. This motivates the second idea: implementing a workflow engine that runs workflow steps (jobs) on SLURM with correct interdependencies. A third idea is enabling efficient, distributed execution of process mining operators on HPC clusters, i.e. allowing users to run process mining workflows on an HPC cluster.
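To make the first idea concrete, the sketch below mines a toy execution log (the field layout is invented for illustration, in the spirit of SLURM's `sacct` output) for per-step durations, the kind of signal from which execution parameters could be deduced:

```python
from datetime import datetime

# Hypothetical event log: (job_id, step, start, end); format invented for illustration.
LOG = [
    ("42", "preprocess",  "2022-10-18T09:00:00", "2022-10-18T09:05:00"),
    ("42", "solve",       "2022-10-18T09:05:00", "2022-10-18T09:45:00"),
    ("42", "postprocess", "2022-10-18T09:45:00", "2022-10-18T09:50:00"),
]

def step_durations(log):
    """Return {step: duration in seconds} mined from the event log."""
    out = {}
    for _, step, start, end in log:
        t0 = datetime.fromisoformat(start)
        t1 = datetime.fromisoformat(end)
        out[step] = (t1 - t0).total_seconds()
    return out

durations = step_durations(LOG)
# The dominant step is a natural candidate for more cores or dedicated resources.
print(max(durations, key=durations.get))  # → solve
```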
Agenda
8:45 – 9:00
Welcome
9:00 – 9:30
Talk 1: Introduction to Process Mining
9:30 – 10:00
Talk 2: Introduction to HPC challenges to process mining
10:00 – 10:30
Talk 3: Introduction to scientific workflows and workflow management systems
10:30 – 11:00
Break
11:00 – 11:30
Talk 4: The first idea: Process mining on logs of execution of scientific workflows on HPC
11:30 – 12:00
Talk 5: The second idea: Building a workflow engine to run workflow steps on the HPC cluster with correct interdependencies
12:00 – 12:30
Discussion
12:30 – 12:45
Conclusion
Training: Processing and Analyzing Micrographs with Artificial Intelligence
Name: Training: Processing and Analyzing Micrographs with Artificial Intelligence
Event type: Workshop
Date: October 2 to 7, 2022
Format: Hybrid
Target Audience: HPC users
Contact Person: Setareh Medghalchi
Description
High-performance materials such as steels typically possess a heterogeneous microstructure. Owing to this, their properties exceed those of the individual components, but capturing the heterogeneity reliably requires observing and analyzing relatively large areas. High-resolution scanning electron microscopy serves as a tool to unravel many of the physical mechanisms of deformation from the sub-micron to the millimeter scale. On the other hand, collecting and analyzing high-resolution image data from large areas requires laborious effort and a considerable amount of time, which is why it is not yet performed routinely.
However, new image-analysis tools in conjunction with deep convolutional neural networks (CNNs) allow us to handle the data collected from large areas.
In this workshop, we will step through several image analysis techniques using Python libraries within Jupyter notebooks. Along the way, we will show participants examples of the statistical information about specific features of real microstructures that can be obtained with these methods, including microstructural information such as phase fractions, as well as insights into deformation mechanisms from damage site detection and classification.
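As a minimal illustration of one such statistic, a phase fraction can be estimated from an already-segmented micrograph by simple pixel counting. The sketch below uses a tiny synthetic binary "image" in place of real SEM data:

```python
def phase_fraction(image, phase=1):
    """Fraction of pixels belonging to `phase` in a segmented 2D image."""
    total = sum(len(row) for row in image)
    hits = sum(row.count(phase) for row in image)
    return hits / total

# Synthetic 4x4 segmentation: 1 = second phase, 0 = matrix (labels are illustrative).
segmented = [
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 1],
]
print(phase_fraction(segmented))  # → 0.375
```

In practice the segmentation itself is the hard part, and is where the CNN-based methods of the workshop come in.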
Introduction to interactive HPC with JupyterHub at the RWTH
Abstract
The new HPC JupyterHub service at RWTH Aachen University allows all eligible users of the RWTH Compute Cluster to use the existing compute hardware interactively through Jupyter Notebooks. The HPC JupyterHub provides customizable profiles with a variety of programming kernels, software packages, and hardware definitions. This workshop will introduce the HPC JupyterHub service and offer an interactive demo. There will also be a discussion section to explore new use cases.
Agenda
- Introduction to JupyterHub @ RWTH (20 minutes)
- Interactive Demo (40 minutes)
- Q & A Session (30 minutes)
Format
Online via Zoom.
Target Audience
HPC, Simulation Software and Machine Learning users.
Requirements
HPC and VPN account at the RWTH for the interactive demo.
Capacity
Unlimited for the presentation, and 96 for the interactive demo.
Contact Person
Alvaro Frank a.frank@itc.rwth-aachen.de
Registration
Registration for RWTH externals
Registration closing date: September 18th, 2022
PPCES 2021
About PPCES
This one-week online event continues the tradition of the annual week-long events that have taken place in Aachen every spring since 2001. We will cover the basics of parallel programming using OpenMP and MPI in Fortran and C/C++, a first step towards performance tuning, and current topics in AI/machine learning. Hands-on exercises are included for each topic.
The contents of the courses are generally applicable but will be specialized towards CLAIX, the compute cluster currently installed at RWTH's IT Center. It might be helpful to read through the information provided during the HPC introduction on March 12 this year, especially if you want to actively use CLAIX after this event.
OpenMP is a widely used approach for programming shared memory architectures, supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics such as programming NUMA machines. The nodes of the RWTH Compute Cluster contain an increasing number of cores and thus we consider shared memory programming a vital alternative for applications that cannot be easily parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores.
The Message Passing Interface (MPI) is the de-facto standard for programming large HPC systems. We will introduce the basic concepts and give an overview of some advanced features. Also covered is hybrid parallelization, i.e., the combination of MPI and shared memory programming, which is gaining popularity as the number of cores per cluster node grows.
Machine Learning: We provide an overview of end-to-end deep learning with the latest version of Tensorflow/Keras. It covers the basic concepts for defining models with Keras and data pipelines with Tensorflow’s “Dataset”, and for visualizing results with Tensorboard during training. If training on one node or GPU is not enough, we show how to scale distributed training up and out onto multiple compute nodes and GPUs with Horovod. Furthermore, we provide an introduction to scikit-learn, with an overview of the different machine learning algorithms it provides and how to utilize it on GPUs with H2O4GPU. The training courses include hands-on exercises to be run directly on RWTH infrastructure.
Guest Speakers
We are very happy to present two guest speakers:
- Ruud van der Pas (Oracle) on OpenMP
- Georg Zitzlsberger (IT4Innovations National Supercomputing Center at VSB – Technical University of Ostrava) on Machine Learning
Agenda
Day 1+2: OpenMP
Monday, March 22 | Day 1: OpenMP Part I | |
---|---|---|
09:00 – 10:30 | OpenMP Basics part 1 | Christian Terboven |
11:00 – 12:00 | OpenMP Basics part 2 (incl. Lab) | Christian Terboven |
14:00 – 15:30 | OpenMP Basics part 3 (incl. Lab) | Christian Terboven |
16:00 – 17:00 | OpenMP Basics part 4 (incl. Lab) | Christian Terboven |
Tuesday, March 23 | Day 2: OpenMP Part II | |
---|---|---|
09:00 – 10:30 | Getting OpenMP up to speed | Ruud van der Pas |
11:00 – 12:00 | OpenMP SIMD | Tim Cramer |
14:00 – 15:30 | OpenMP Advanced Tasking (incl. Lab) | Christian Terboven |
16:00 – 17:00 | OpenMP for Accelerators | Christian Terboven |
Day 3+4: MPI
Wednesday, March 24 | Day 3: MPI Part I | |
---|---|---|
09:00 – 10:30 | Introduction to MPI | Marc-Andre Hermanns |
11:00 – 12:00 | Blocking Point-to-Point Communication I | Marc-Andre Hermanns |
14:00 – 15:30 | Blocking Point-to-Point Communication II | Marc-Andre Hermanns |
16:00 – 17:00 | Non-blocking Point-to-Point Communication | Marc-Andre Hermanns |
Thursday, March 25 | Day 4: MPI Part II | |
---|---|---|
09:00 – 10:30 | Blocking Collective Communication | Marc-Andre Hermanns |
11:00 – 12:00 | Communicator Basics | Marc-Andre Hermanns |
14:00 – 15:30 | Hybrid Programming | Marc-Andre Hermanns |
16:00 – 17:00 | Outlook on Advanced Topics & Wrap-Up | Marc-Andre Hermanns |
Day 5: Machine Learning
Seminar times will be 9:00-12:00 and 13:00-15:00.
This event is partially supported by the Czech Ministry of Education, Youth and Sports through the Large Infrastructures for Research, Experimental Development and Innovations project “e-Infrastruktura CZ – LM2018140”.
Friday, March 26 | Day 5: Machine Learning | |
---|---|---|
09:00 – 09:45 | Introduction to scikit-learn | Georg Zitzlsberger |
09:45 – 10:00 | Getting Started on the Cluster | Jannis Klinkenberg |
10:00 – 10:30 | Hands-on scikit-learn examples | Georg Zitzlsberger |
11:00 – 12:00 | Introduction to Deep Neural Networks | Georg Zitzlsberger |
13:00 – 14:00 | Tensorflow/Keras Exercises (short intro + hands-on exercise) | Georg Zitzlsberger |
14:00 – 14:45 | Multi-GPU with Horovod (incl. short hands-on) | Georg Zitzlsberger |
14:45 – 15:00 | Q&A | Georg Zitzlsberger |
Prerequisites
Attendees of part I and II should be comfortable with C/C++ or Fortran programming in a Linux environment and interested in learning more about the technical details of application tuning and parallelization.
Participants of part III – machine learning – will need some basic knowledge of Python.
All presentations will be given in English.
This event will be an online presentation.
All parts of the tutorials will be accompanied by exercises.
Participants who have access to the RWTH identity management can use their own HPC account.
Members of RWTH who do not yet have such an account can create an HPC account here (https://sso.rwth-aachen.de/idp/profile/SAML2/Redirect/SSO?execution=e1s1) using the self-service (choose: Accounts und Kennwörter – Account anlegen – Hochleistungsrechnen).
External participants must provide their own Linux environment containing an OpenMP-capable compiler and an MPI library, or alternatively a Singularity environment.
For parts I and II a Linux virtual machine will be sufficient.
For example, on Ubuntu 20.04 LTS the following commands can be used to install the necessary software for OpenMP and MPI:

# g++ is available by default
sudo apt install gfortran       # install Fortran compiler - if necessary
sudo apt install libomp-dev     # install OpenMP libraries
sudo apt install mpich          # install MPI library
Simple program examples can be compiled and executed by
g++ -fopenmp openmp-hello-world.cc; ./a.out
gfortran -fopenmp openmp-hello-world.f90; ./a.out
mpicc mpi_hello_world.c -o ./a.out; mpirun -np 2 ./a.out
For part III (ML), participants need to be able to run Singularity containers with access to one or more NVIDIA GPUs.
Course Material of PPCES 2021
OpenMP
Presentations
- Organization_PPCES2021
- 00-openmp-CT-welcome
- 01-openmp-CT-overview
- 02-openmp-CT-parallel_region
- 03-openmp-CT-worksharing
- 04-openmp-CT-scoping
- 05-openmp-CT-compilers
- 06-openmp-CT-welcome
- 07-openmp-CT-tasking_motivation
- 08-openmp-CT-tasking_model
- 09-openmp-CT-taskloop
- 10-openmp-CT-tasking_cutoff
- 11-openmp-CT-tasking_dependencies
- 12-openmp-CT-welcome
- 13-openmp-CT-NUMA
- 14-openmp-CT-tasking_affinity
- 15-openmp-CT-hybrid
- 16-openmp-CT-SIMD
- 17-openmp-CT-offloading
Exercises
MPI
Presentations
- hpc.nrw-01-MPI-Overview
- hpc.nrw-02-MPI_Concepts
- hpc.nrw-03-Blocking_Point-to-Point_Communication
- hpc.nrw-04-Non-blocking-Point-to-Point_Communication
- hpc.nrw-05-Derived_Datatypes
- hpc.nrw-06-Blocking_Collective_Communication
- hpc.nrw-08-Hybrid_Programming
- hpc.nrw-09-Communicator_Handling
- hpc.nrw-XX-Advanced_Topics
Exercises
Machine Learning
Presentations
- 00_Agenda
- 01a_Introduction_to_scikit-learn
- 01b_scikit-learn_Optimizations
- 03_scikit-learn_Hands-On
- 04_Introduction_Deep_Neural_Networks
- 05_Tensorflow_Keras
- 06_Multi-GPU_Horovod
Exercises
Further Information
The OpenMP part is also available as online tutorial (including videos): https://hpc-wiki.info/hpc/OpenMP_in_Small_Bites