HPC Cluster: Linux Kernel Upgrade
The Linux kernel on the CLAIX18 compute nodes is being upgraded to version 4.18.0-477.21.1. To maximise the availability of the compute cluster, the mandatory reboot of the nodes is scheduled as a reboot job, allowing all already submitted and running jobs to complete before the upgrade takes place.
Please note that the reboot is prioritised over other jobs, and some nodes may be temporarily unavailable after the reboot.
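For illustration: assuming the cluster's scheduler is Slurm (the scheduler is not named above), such a deferred reboot is typically queued by an administrator with `scontrol`, so each node reboots as soon as its running jobs have drained:

```shell
# Sketch only (requires Slurm admin rights); the node list is a placeholder.
# ASAP stops new jobs from starting on the nodes; nextstate=RESUME returns
# them to service automatically after the reboot.
scontrol reboot ASAP nextstate=RESUME reason="kernel upgrade" nodelist=ncm[0001-0100]
```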
Best regards,
Your HPC-Team@RWTH
You can track any disruptions or security advisories arising from this change in the Email category of our status reporting portal.
Release Notes Version 2.4.0 – New default theme “System”
Improvements and bug fixes:
- In-app browser is used for links
- New default theme “System” was added
- Redirection when clicking on dashboard widgets (e.g., “calendar,” “grades”) has been set up
- Some authentication issues were fixed
- Various bug fixes
Change in SSH Configuration: Deprecation of Insecure Methods, Addition of New Methods
As a result of a recent security evaluation, we have decided to disable several key-exchange methods, message authentication codes and encryption ciphers that are classified as insecure or weak; the affected methods and method groups are listed below. In general, we have disabled SHA-1-based methods, since SHA-1 has been broken since early 2017 (cf. Stevens et al.: “The first collision for full SHA-1”).
We kindly ask you to update your client configuration accordingly, since these methods can no longer be used to access the RWTH Aachen HPC Cluster until further notice.
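If your client negotiates a rejected method, you can pin modern algorithms in `~/.ssh/config`. The entry below is a sketch only: the algorithm names are common current OpenSSH defaults, not the cluster's official lists, so please check the announced method lists before copying them.

```
# ~/.ssh/config -- example host entry; algorithm names are illustrative
Host rwth-hpc
    HostName login18-1.hpc.itc.rwth-aachen.de
    KexAlgorithms curve25519-sha256,diffie-hellman-group16-sha512
    MACs hmac-sha2-256,hmac-sha2-512
    Ciphers aes256-gcm@openssh.com,aes128-ctr
```

You can list the methods your own client supports with `ssh -Q kex`, `ssh -Q mac` and `ssh -Q cipher`.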
New version of the RWTH Person Directory and RWTHcontacts
Just like in the RWTH Directory of Organizations, buildings and addresses can now be selected via the RWTH building management system. Selected buildings are linked to the RWTH Navigator in RWTHcontacts.
Resource limits on HPC dialog systems changed
We have reduced the per-user resource limits for main memory on the HPC dialog systems (login18-1.hpc.itc.rwth-aachen.de etc.). A single user can now use only about 25% of the available main memory, i.e. 96 GB on most of our servers. On login18-x-1 and login18-x-2, as before, only 16 GB are available per user.
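You can inspect the limits in effect in your current login session with the shell's `ulimit` builtin. Note this is a general check, not specific to how the cluster enforces the cap: limits applied via cgroups (common for per-user memory caps on shared login nodes) do not show up in `ulimit` output.

```shell
# Print all resource limits of the current shell session.
# Per-user memory caps enforced through cgroups are not
# reported here; ulimit only shows classic rlimits.
ulimit -a
```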
Release Notes Version 2.3.0
Improvements and bug fixes:
- Toasts for notifications have been improved, e.g., after deleting favorites, so that you can now “hold” them and they do not disappear immediately
- If you go to a page such as the calendar without being logged in to the app (without SSO login), you will now be redirected back to that page after completing the login process (previously, you were redirected to the dashboard).
- In Moodle:
  - for assessments, the maximum achievable score is now also displayed
  - in the download center, labels were previously displayed as downloadable files; this was not intended, and they are no longer displayed
- Minor color adjustments
- Minor bug fixes, e.g., when displaying the loading animation.
Changes to Abaqus Batch Jobs
Today, we made several major changes to the Abaqus configuration on the RWTH HPC systems. These include the automatic configuration of many batch job parameters that previously had to be generated on the fly as part of the example batch script described in our Abaqus documentation.
With this new setup, users should experience fewer problems when starting Abaqus batch jobs and will receive descriptive error messages, including suggested solutions, when something goes wrong during configuration. These changes only affect Abaqus 2020 and newer.
Do I need to change my batch scripts?
No. The previous example batch scripts still work. However, you may omit the part that generates a local abaqus_v6.env file. If such a file exists in the job’s working directory, its settings will override the system settings but in most cases these are identical as of now.
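For reference, a minimal batch script under the new setup might look like the sketch below. The module name, resource values and input file are placeholders, not taken from the official example, so adapt them to your case.

```bash
#!/usr/bin/env bash
#SBATCH --job-name=abaqus-demo     # illustrative values only
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=3900M
#SBATCH --time=01:00:00

module load abaqus/2021            # placeholder module name

# No hand-written abaqus_v6.env is generated here: the batch
# parameters are now configured automatically by the system.
# A local abaqus_v6.env in the working directory would override
# those settings, so make sure none is left over from old jobs.
abaqus job=my_model input=my_model.inp interactive
```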
What to do if I experience problems?
If you have trouble running the Abaqus GUI on a frontend node, please make sure to delete the abaqus_v6.env file from the working directory. Alternatively, you can try starting the GUI from another directory. If you still experience problems or if your batch jobs behave in an unexpected fashion, please report the issue to servicedesk@itc.rwth-aachen.de
OS Upgrade to Rocky 8.8
Dear users of the cluster,
on

** July 17, 2023, from 7:00 a.m. to 5:00 p.m. **

there will be a maintenance window during which we will update the operating system from Rocky Linux 8.7 to Rocky Linux 8.8. The frontends will also be updated, so you will not be able to log into the cluster or access your data.
There is one exception: the MFA test machine login18-4 will remain accessible, but you will only be able to log in there with a second factor [1]. However, $HPCWORK will temporarily be unreachable there as well, since the Lustre file system is also undergoing maintenance.
We do not expect that you will have to recompile your software or change your job scripts, so your jobs should start normally once the maintenance ends.
With best regards
Your HPC Team @ RWTH
Release Notes Version 2.2.1 – Use of preferred map apps
Improvements and bug fixes:
- Some color adjustments in dark mode
- Incorrect spacing in the “Student Councils” section was fixed
- Login via in-app browser is now possible
- The app did not start when the authentication service was down; this has now been fixed
- For links to map apps (e.g., Google Maps or similar), the preferred app can now be selected by the user
- The app can be downloaded via F-Droid again
CLAIX-2016 EOL
CLAIX-2016 reached its end of life a while ago. For convenience, we have continued to operate the following systems:
- CLAIX-2016 dialog (“login”) nodes:
  login.hpc.itc.rwth-aachen.de
  login-g.hpc.itc.rwth-aachen.de
  login-t.hpc.itc.rwth-aachen.de
- Data transfer node:
  copy.hpc.itc.rwth-aachen.de
- CLAIX-2016-SMP nodes (144 cores, 2 TB main memory):
  lns02.hpc.itc.rwth-aachen.de
  lns03.hpc.itc.rwth-aachen.de
We will switch off all remaining nodes on **July 10th, 2023**. Please use the CLAIX-2018 login / transfer nodes in the future:
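For example, logging in and copying data via a CLAIX-2018 dialog node looks like this (the username is a placeholder; login18-1 is the dialog node mentioned above):

```shell
# Log in to a CLAIX-2018 dialog node (replace ab123456 with your user ID)
ssh ab123456@login18-1.hpc.itc.rwth-aachen.de

# Copy a file to the cluster via the same node
scp myfile.tar ab123456@login18-1.hpc.itc.rwth-aachen.de:~/
```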

