IT Center Changes

Release Notes Version 2.42.0

October 14th, 2025

Improvements and bug fixes:

  • Animations and behavior of pins on the maps have been adjusted
  • Various bug fixes

Opencast: Upgrade to version 18

October 10th, 2025

New Project Limits

October 9th, 2025

As you know, we operate our cluster under a one-cluster concept. This also means that the system is supported by multiple funding sources with different budgets. The NHR share (Tier-2) is the largest. Unfortunately, the project quota for RWTH-s, RWTH-thesis, and RWTH-lecture projects (Tier-3) in the machine learning segment (GPU partition) is currently overbooked. Therefore, the following changes take effect immediately for all new project applications (already granted projects are not affected):

  • RWTH-s projects: reduction of the maximum project size to 4000 GPU-h per project per year
  • RWTH-lecture and -thesis projects: reduction of the maximum project size to 1500 GPU-h per project

In order to ensure that all users can still successfully carry out their research projects, we propose the following approach for future proposals:

  • If you do not need more than 4000 GPU-h, please apply for RWTH-s as before.
  • If you do not need more than 10000 GPU-h and would like to conduct a project in the field of machine learning or artificial intelligence, please apply for a WestAI project.
  • If you do not need more than 10000 GPU-h, you are welcome to apply once (!) for an NHR Starter project. Please note that this category is intended as preparation for an NHR Normal or Large project. An extension is therefore only possible with a full proposal and should be for more than 10000 GPU-h. NHR Starter projects are allocated centrally by the NHR office; if you choose this route, please select RWTH as your preferred center.
  • If you need more than 10000 GPU-h, please apply for resources within an NHR Normal or Large project. To ensure appropriate use of the system, NHR projects undergo a science-led peer-review process.
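
As a rough illustration only, the routing above can be sketched as a small helper. The function name and the `ml_project` flag are hypothetical, not an official tool, and the thresholds simply restate the GPU-h limits listed above:

```python
def suggest_project_category(gpu_hours: int, ml_project: bool = False) -> str:
    """Map an estimated GPU-h need to the proposal route described above.

    gpu_hours  -- estimated GPU-hours needed per project (per year for RWTH-s)
    ml_project -- True if the project is in machine learning / AI
    """
    if gpu_hours <= 4000:
        # Small requests fit an RWTH-s project as before.
        return "RWTH-s"
    if gpu_hours <= 10000:
        # ML/AI projects up to 10000 GPU-h can apply for WestAI;
        # otherwise a one-time NHR Starter project is an option.
        return "WestAI" if ml_project else "NHR Starter"
    # Anything larger requires a peer-reviewed NHR Normal or Large project.
    return "NHR Normal or Large"
```

For example, an 8000 GPU-h machine-learning project would be routed to WestAI, while a 20000 GPU-h request would need an NHR Normal or Large proposal.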

We are convinced that this step is in the interest of all users in order to avoid overloading the GPU partition (ML segment).

Furthermore, please note that in the future, NHR Normal projects may request up to 20 million core-h.

Please refer to the project catalogue for a complete overview (including a decision flow chart).

Please contact servicedesk@itc.rwth-aachen.de for any further questions.

Updates on Migration to DataStorage.nrw

October 1st, 2025

The migration of all Coscine projects has been ongoing since the beginning of July 2025: the previous storage system, Research Data Storage (RDS), is being completely replaced by DataStorage.nrw. Step by step, all projects are being transferred to the new infrastructure.

Around 1380 projects have already been migrated – that’s about 1 PB of the total planned 2.5 PB. Project owners with particularly large amounts of data were contacted directly to arrange individual dates for the migration.

The transfer speed varies from project to project and depends, among other things, on the number of resources as well as the number and size of the files they contain.

We will provide regular updates on further progress.

Release Notes Version 2.41.0

September 23rd, 2025

Improvements and bug fixes:

  • Various bug fixes

Slurm GPU HPC resource allocation changing on 01.11.2025

September 17th, 2025

Starting on 01.11.2025, the CLAIX HPC systems will change the way GPU resources are requested and allocated.
Users submitting Slurm jobs will no longer be able to request arbitrary amounts of CPU and memory resources when using GPUs on GPU nodes.
Requesting an entire GPU node's memory or all of its CPUs while requesting only a single GPU will no longer be possible.
Each GPU within a GPU node will have a strict corresponding maximum of CPUs and memory that can be requested.
To obtain more than this maximum of CPUs or memory per GPU, additional GPUs must be requested as well.
The specific limits per GPU on GPU nodes will eventually be documented separately.
Users are expected to modify their submission scripts or methods accordingly.
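
A minimal job-script sketch of the new scheme follows. The actual per-GPU caps have not yet been published; the figures of 24 CPUs and 120 GB of memory per GPU below are purely illustrative assumptions, as is the application name:

```shell
#!/usr/bin/env bash
# Hypothetical CLAIX job script under the new per-GPU limits.
# ASSUMPTION (for illustration only): at most 24 CPUs and 120 GB
# of memory may be requested per GPU; check the official CLAIX
# documentation for the real limits once they are published.

#SBATCH --gres=gpu:2         # request 2 GPUs ...
#SBATCH --cpus-per-task=48   # ... allowing up to 2 x 24 CPUs
#SBATCH --mem=240G           # ... and up to 2 x 120 GB of memory

srun ./my_gpu_application    # placeholder for your own program
```

Requesting, say, 96 CPUs with only one GPU would be rejected under this scheme; the CPU and memory requests must stay within the per-GPU caps times the number of GPUs requested.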

This change is driven by our efforts to update the HPC resource billing mechanism to comply with NHR HPC directives.
NHR requires that computing projects apply for CPU and GPU resources independently.
NHR also requires that HPC Centers track the use of these CPU and GPU resources.
These independently granted resources are then accounted for per Slurm job on our CLAIX nodes.
Therefore, CPU nodes will track only CPU usage (and equivalent memory), and GPU nodes will track only GPU usage.
The quota tools will eventually reflect this as well.

New GPT-5 Models (v1.7.0)

September 5th, 2025

Features:

  • Newly supported models:
    • gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-chat
    • Beta support for various models from the Llama, Mistral, Gemma, Qwen, Teuken, and InternVL families
  • Users with access to multiple user groups can now switch between groups in the frontend
  • Added notifications for automatically deleted chats, along with a new modal for manually deleting or extending these chats
    • [ADMIN] Configurable retention period for automatic chat deletion per tenant
  • [ADMIN] New report: “Hourly Activity Heatmap”
  • [ADMIN] Tenants can now publish info banners visible to all users within the tenant
  • [ADMIN] Automatic email notifications at 80%, 90%, and 100% of a user group’s hard limit consumption
  • [ADMIN] For each deployment, an optional data processing region (Germany, EU, Worldwide) can be selected; this information is also displayed in the frontend

Bugfixes and Improvements:

  • Content pages are now displayed correctly across all tenants
  • Optimized print layout for chat history
  • [ADMIN] Help menu entries and deployments are now sortable via drag & drop
  • [ADMIN] In the deployment modal, model-type-specific parameters are only displayed after a model has been selected
  • [ADMIN] “Reasoning Effort Level” is now a mandatory field; the maximum token limit can no longer be set to 0
  • [ADMIN] Model selection in the deployment modal is now sorted alphabetically
  • [ADMIN] For supported models, a price can now be defined for cached input tokens
  • [ADMIN] Deployments can now be duplicated
  • [ADMIN] Deployments are automatically set to inactive if their endpoint is deleted
  • [ADMIN] Improved axis labeling in reports

Release Notes Version 2.40.0

September 2nd, 2025

Improvements and bug fixes:

  • Animations have been improved in some places
  • Various bug fixes

Integration of GPT-5 Chat

August 20th, 2025

  • GPT-5 Chat integrated as the language model
    • GPT-5 Chat prioritizes conversational tone, immediate helpfulness, and faster responses
    • Optimized for clarity, brevity, friendliness, and consistent chat behavior