HPC Slurm privacy mode has been disabled
UPDATE: This feature has been temporarily disabled due to incompatibilities with https://perfmon.hpc.itc.rwth-aachen.de/
Slurm commands within the ITC HPC Cluster have been changed to hide personal Slurm information from other users.
- Users are prevented from viewing jobs or job steps belonging to other users.
- Users are prevented from viewing reservations which they can not use.
- Users are prevented from viewing usage of any other user with Slurm.
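In practice, the privacy mode restricts the usual Slurm query commands to your own data. A brief sketch of commands that continue to work as expected (exact behavior and output depend on the cluster's Slurm version and configuration):

```shell
# Queue listings now effectively behave as if restricted to your own jobs:
squeue --me

# Accounting queries remain available for your own account:
sacct -u "$USER"

# Only reservations you are permitted to use are listed:
scontrol show reservation
```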
If you experience any problems, please contact us as usual via servicedesk@itc.rwth-aachen.de with a precise description of the features you are using and your problem.
Temporary Deactivation of User Namespaces
Update 08.02.24:
We have installed a bugfix release for the affected software component and enabled user namespaces again.
Dear users,
due to an open security issue we are required to disable the feature of so-called user namespaces on the cluster. This feature is mainly used by containerization software and affects the way apptainer containers will behave. The changes are effective immediately. Most users should not experience any interruptions. If you experience any problems, please contact us as usual via servicedesk@itc.rwth-aachen.de with a precise description of the features you are using. We will reactivate user namespaces as soon as we can install the necessary fixes for the aforementioned vulnerability.
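Whether unprivileged user namespaces are currently allowed can be read from the kernel's sysctl interface. The following is a minimal, hypothetical helper (not an official ITC tool) that performs this check:

```python
from pathlib import Path

def user_namespaces_enabled(proc_file: str = "/proc/sys/user/max_user_namespaces") -> bool:
    """Return True if the sysctl exists and permits at least one user namespace.

    Apptainer's non-setuid mode relies on this kernel feature; when the
    limit is 0 (or the file is absent), user namespaces are unavailable.
    """
    p = Path(proc_file)
    if not p.exists():
        # Kernel built without user-namespace support, or path unavailable.
        return False
    return int(p.read_text().strip()) > 0
```

On a system where the feature has been switched off via `sysctl user.max_user_namespaces=0`, the helper returns False.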
Terrapin Attack Counter Measures (SSH)
A recently discovered flaw in the implementation of the Secure Shell (SSH) protocol has led to an attack vector called the "Terrapin Attack", which enables an attacker to break the integrity of an SSH connection in order to weaken its overall security. TL;DR: To implement an effective countermeasure against the attack, we have disabled the affected methods in the HPC cluster's SSH configuration. Consequently, these methods cannot be used until further notice:
- Ciphers: ChaCha20-Poly1305
- MACs: Any etm method (e.g. hmac-sha2-512-etm@openssh.com)
Please adapt your configuration accordingly if it relies on the methods mentioned above.
The attack is only feasible when either the ChaCha20-Poly1305 cipher is used, or a Cipher Block Chaining (CBC) cipher (or, in theory, a Counter Mode (CTR) cipher) is combined with an encrypt-then-MAC (etm) message authentication code (MAC) method, and the attacker is able to act as a man-in-the-middle. (Example: a security suite on your client machine may perform deep packet inspection, which is by definition a, hopefully "good", man-in-the-middle, to protect you from other threats.)
The Galois Counter Mode (GCM) AES ciphers are not affected.
We encourage you to employ strong encryption ciphers such as aes256-gcm@openssh.com and a sufficiently strong MAC method (e.g. hmac-sha2-256 or hmac-sha2-512) immune to the attack vector.
Note:
Due to a bug in the Windows OpenSSH client, which employs the umac-128@openssh.com MAC by default, we have disabled this problematic method in the SSH server configuration as well to minimize issues when connecting to the HPC cluster. Until further notice, only hmac-sha2-512 and hmac-sha2-256 can be employed as MACs. Please adapt your configuration accordingly if required, e.g.:
Ciphers aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512,hmac-sha2-256
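For OpenSSH clients, these algorithms can also be pinned per host in `~/.ssh/config`. A minimal sketch; the `Host` pattern below is illustrative and must be adapted to the login node you actually use:

```
# ~/.ssh/config -- example host block (hostname pattern is illustrative)
Host login*.hpc.itc.rwth-aachen.de
    Ciphers aes256-gcm@openssh.com,aes256-ctr
    MACs hmac-sha2-512,hmac-sha2-256
```

Scoping the settings to a `Host` block keeps connections to other servers on their default algorithm lists.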
You can track any disruptions or security advisories that may occur due to the aforementioned change in the Email category on our status reporting portal.
Multi-Factor Authentication Mandatory starting 15 January 2024
OS Upgraded to Rocky 8.9
During the last cluster maintenance, the OS of the HPC cluster was upgraded to Rocky Linux 8.9, since Rocky 8.8 had reached its end of life (EOL), to ensure continuous update support for the systems.
The upgrade provides a modernized system base and security enhancements. From the user's perspective, the usage and expected performance of the cluster remain unchanged.
You can track any disruptions or security advisories that may occur due to the aforementioned change in the Email category on our status reporting portal.
CLAIX System Maintenance on 27.11.2023
Dear users of the RWTH Compute Cluster,
on 27.11.2023, the entire cluster will not be available from 8 a.m. to 12 noon due to system maintenance.
Kind regards,
Your HPC Team
You can track any disruptions or security advisories that may occur due to the aforementioned change in the RWTH-HPC category on our status reporting portal.
End of Apptainer Pilot Phase
We are happy to announce that, after a long pilot phase, we are granting all users full access to use Apptainer containers on the cluster. Containers are virtual environments that allow running an identical software configuration across several systems, e.g., two different HPC systems, and simplify the setup of software that only runs well on other Linux distributions. Apptainer also supports the conversion of Docker images and can thus run a vast variety of existing images with little to no extra effort.
Previously, we only allowed curated container images as part of our software stack and individual images on a per-case basis. Starting today, users can build and run their own container images anywhere on the cluster.
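For readers new to Apptainer, a typical workflow for building and running a personal container image looks roughly like this; the image and file names are illustrative, not curated ITC images:

```shell
# Convert an existing Docker image into a local Apptainer image file:
apptainer pull ubuntu.sif docker://ubuntu:22.04

# Run a single command inside the container:
apptainer exec ubuntu.sif cat /etc/os-release

# Build an image from your own definition file:
apptainer build myimage.sif myimage.def
```

Note that `apptainer pull` flattens the Docker layers into a single `.sif` file, so the resulting image can be copied and archived like any regular file.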
If you are interested in using Apptainer, please take a look at our documentation[1] and read the "Best Practices" section to get started and avoid common problems. As part of our efforts to support containerized workloads in HPC, we will also grow our collection of container images in the module system and provide a set of CLAIX-specific base images for various scenarios that can be used as a foundation for your own container images.
[1] https://help.itc.rwth-aachen.de/service/rhr4fjjutttf/article/e6f146d0d9c04d35aeb98da8d261e38b/
CLAIX-2018 dialog systems
Due to the high load on the login / dialog nodes affecting their usability, we have decided to limit the maximum number of usable cores on each login node to four per user. Please note: these login nodes should be used for programming, preparation, and light post-processing of batch jobs. They are not intended for production runs or performance tests. For longer tests (max. 25 minutes), parallel debugging, compiling, etc., you can use our "devel" partition by adding "#SBATCH --partition=devel" to batch jobs, or interactively with "salloc -p devel".
For all production jobs, please use our batch system **without** "#SBATCH --partition=devel". If you want to learn more about the batch system, we invite you to our Slurm introduction.
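As a sketch, a complete test job for the devel partition could look like the following batch script; the job name, task count, and program are placeholders to adapt to your own test case:

```shell
#!/usr/bin/env bash
#SBATCH --job-name=devel-test     # illustrative name
#SBATCH --partition=devel         # short tests only (max. 25 minutes)
#SBATCH --time=00:20:00           # stay within the devel time limit
#SBATCH --ntasks=4
#SBATCH --output=devel-test.%j.log

# Replace with the program you actually want to test:
srun ./my_test_program
```

Submitting the same script without the `--partition=devel` line sends the job to the normal production partitions instead.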
You can track any disruptions or security advisories that may occur due to the aforementioned change in the RWTH-HPC category on our status reporting portal.
FastX Server Component Upgraded to Version 3.3.39
The FastX server component installed on the HPC frontend nodes was upgraded to version 3.3.39.
The update contains security enhancements and several bugfixes from which all users benefit when using FastX.
Please make sure to use the latest desktop client when accessing the cluster via FastX.
For more information on how to access the RWTH Aachen Compute Cluster via FastX, please refer to the ITC Help Page.
You can track any disruptions or security advisories that may occur due to the aforementioned change in the Email category on our [status reporting](https://maintenance.rz.rwth-aachen.de/ticket/status/messages/14-rechner-cluster) portal.
ARMForge becomes LinaroForge
With Linaro’s acquisition of the Forge toolset from ARM, the popular parallel debugger DDT and the performance analysis tool Performance Reports are now available as Linaro Forge.
The newest version of the toolset is available on CLAIX via module load LinaroForge/23.0.2.
You can track any disruptions or security advisories that may occur due to the aforementioned change in the RWTH-HPC category on our status reporting portal.