{"id":23772,"date":"2026-02-26T11:00:15","date_gmt":"2026-02-26T10:00:15","guid":{"rendered":"https:\/\/blog.rwth-aachen.de\/itc\/?p=23772"},"modified":"2026-02-26T08:46:38","modified_gmt":"2026-02-26T07:46:38","slug":"ki-halluzinationen-was-heisst-das-genau","status":"publish","type":"post","link":"https:\/\/blog.rwth-aachen.de\/itc\/en\/2026\/02\/26\/ki-halluzinationen-was-heisst-das-genau\/","title":{"rendered":"AI Hallucination \u2013 What Is It?"},"content":{"rendered":"<div class=\"twoclick_social_bookmarks_post_23772 social_share_privacy clearfix 1.6.4 locale-en_US sprite-en_US\"><\/div><div class=\"twoclick-js\"><script type=\"text\/javascript\">\/* <![CDATA[ *\/\njQuery(document).ready(function($){if($('.twoclick_social_bookmarks_post_23772')){$('.twoclick_social_bookmarks_post_23772').socialSharePrivacy({\"txt_help\":\"Wenn Sie diese Felder durch einen Klick aktivieren, werden Informationen an Facebook, Twitter, Flattr, Xing, t3n, LinkedIn, Pinterest oder Google eventuell ins Ausland \\u00fcbertragen und unter Umst\\u00e4nden auch dort gespeichert. 
N\\u00e4heres erfahren Sie durch einen Klick auf das <em>i<\\\/em>.\",\"settings_perma\":\"Dauerhaft aktivieren und Daten\\u00fcber-tragung zustimmen:\",\"info_link\":\"http:\\\/\\\/www.heise.de\\\/ct\\\/artikel\\\/2-Klicks-fuer-mehr-Datenschutz-1333879.html\",\"uri\":\"https:\\\/\\\/blog.rwth-aachen.de\\\/itc\\\/en\\\/2026\\\/02\\\/26\\\/ki-halluzinationen-was-heisst-das-genau\\\/\",\"post_id\":23772,\"post_title_referrer_track\":\"AI+Hallucination+%E2%80%93+What+Is+It%3F\",\"display_infobox\":\"on\"});}});\n\/* ]]> *\/<\/script><\/div><p><div id=\"attachment_23775\" style=\"width: 310px\" class=\"wp-caption alignright\"><a href=\"https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-23775\" class=\"size-medium wp-image-23775\" src=\"https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-300x200.jpg\" alt=\"Man using laptop to chat with AI is visibly puzzled\" width=\"300\" height=\"200\" srcset=\"https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-300x200.jpg 300w, https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-1024x683.jpg 1024w, https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-768x512.jpg 768w, https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-1536x1024.jpg 1536w, https:\/\/blog.rwth-aachen.de\/itc\/files\/2026\/02\/KI-Halluzinationen-2048x1366.jpg 2048w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><p id=\"caption-attachment-23775\" class=\"wp-caption-text\">Source: <a href=\"https:\/\/de.freepik.com\/vektoren-kostenlos\/konzept-der-kuenstlichen-intelligenz-flachmensch-mit-ki-technologie-zur-hilfe-bei-aufgaben-und-zur-beantwortung-von-fragen_57453376.htm#fromView=search&amp;page=2&amp;position=2&amp;uuid=369ee894-9731-4812-a058-719586e66c53&amp;query=ai+confused\" target=\"_blank\" rel=\"noopener\">Freepik<\/a><\/p><\/div><\/p>\n<p>The 
internet seems to have an answer for everything. And the generative AI models of our time, too, seem to know almost everything. But can that really be true?<\/p>\n<p>Not quite. As powerful and helpful as generative AI is, it is not without its weaknesses. One of them is the phenomenon known as AI hallucinations: convincing-sounding but factually incorrect or fictitious pieces of content. In this article, we take a closer look at this central problem in the development and application of AI models.<\/p>\n<p><!--more--><\/p>\n<h3><span style=\"color: #00549f;\">What Are AI Hallucinations?<\/span><\/h3>\n<p>When AI-generated content appears plausible or correct but deviates from the underlying sources or from verifiable facts, this is referred to as an AI hallucination. In other cases, AI models simply give incorrect answers. In fields such as medicine in particular, such errors can have serious consequences.<\/p>\n<p>&nbsp;<\/p>\n<h3><span style=\"color: #00549f;\">What Causes AI Hallucinations?<\/span><\/h3>\n<p>AI developers are constantly working to make language models more efficient and reliable. To this end, the models are trained on large amounts of data and continuously optimized. However, errors can creep in during this training process, for example through unsuitable, biased, or incorrect training data, which significantly limits the accuracy of the responses. AI hallucinations are particularly pronounced with complex problems. Large amounts of data also cause difficulties, as AI models have to filter out the relevant information and can reach their limits in doing so.<\/p>\n<p>Errors can also stem from the training methods themselves: when AI models are evaluated, they are often rated better for guessing than for admitting that they do not know the answer. [1]<\/p>\n<p>&nbsp;<\/p>\n<h3><span style=\"color: #00549f;\">How to Recognize AI Hallucinations?<\/span><\/h3>\n<p>AI hallucinations are often difficult to detect. AI models provide quick and confident answers. 
If you are not an expert on the topic in question, you usually have no reason to doubt them. In addition, some AI models tend to adapt to your expectations rather than simply providing objective information. [2]<\/p>\n<p>To detect AI hallucinations, ask the AI for its references. Then check the cited sources and facts manually, as they themselves could be hallucinated.<\/p>\n<p>&nbsp;<\/p>\n<h3><span style=\"color: #00549f;\">What Are the Solutions to the Problem?<\/span><\/h3>\n<p>AI hallucinations can be reduced mainly during model training. Careful data preparation and suitable evaluation methods are crucial here.<\/p>\n<p>However, you can also reduce AI hallucinations with certain settings and instructions. You can tell the AI to simply say \u201cI don&#8217;t know\u201d when it doesn&#8217;t know something. In addition, you can ask the AI to work out the answer step by step.<\/p>\n<p>In general, specific questions and precise instructions help to avoid hallucinations.<\/p>\n<hr \/>\n<p>Responsible for the content of this article is <a href=\"https:\/\/www.itc.rwth-aachen.de\/cms\/it-center\/it-center\/profil\/team\/~epvp\/mitarbeiter-campus-\/?gguid=PER-2DF4PAN&amp;allou=1&amp;lidx=1\" target=\"_blank\" rel=\"noopener\">Masimba Koschke<\/a>.<\/p>\n<p>[1] <a href=\"https:\/\/openai.com\/de-DE\/index\/why-language-models-hallucinate\/\" target=\"_blank\" rel=\"noopener\">OpenAI<\/a><\/p>\n<p>[2] <a href=\"https:\/\/www.iese.fraunhofer.de\/blog\/halluzinationen-generative-ki-llm\/#ref2\" target=\"_blank\" rel=\"noopener\">Fraunhofer-Gesellschaft<\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>Sorry, this entry is only available in 
Deutsch.<\/p>\n","protected":false},"author":6316,"featured_media":23775,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"c2c_always_allow_admin_comments":false,"footnotes":""},"categories":[1514],"tags":[623,712,620,1577,586],"class_list":["post-23772","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ki","tag-artificial-intelligence","tag-fakten","tag-ki","tag-ki-halluzinationen","tag-kuenstliche-intelligenz"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/posts\/23772","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/users\/6316"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/comments?post=23772"}],"version-history":[{"count":8,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/posts\/23772\/revisions"}],"predecessor-version":[{"id":23785,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/posts\/23772\/revisions\/23785"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/media\/23775"}],"wp:attachment":[{"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/media?parent=23772"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/categories?post=23772"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.rwth-aachen.de\/itc\/en\/wp-json\/wp\/v2\/tags?post=23772"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}