Why Does Artificial Intelligence Hallucinate?

To hallucinate is a verb full of history. In its Latin root, alucinari, lies the idea of a detour: a perception that wanders when it finds nothing to hold on to. In the oldest tales, the shaman's visions, the figures glimpsed in the shadows, or nighttime apparitions were not necessarily mistakes, but ways of completing a world that did not offer enough explanations. Where data was missing, a story was born; where the environment gave no guarantees, a symbol emerged. Humans have often resorted to these imaginary constructions to find direction, calm anxiety, or give shape to the unknown. To speak of hallucination, then, always implies something emotional: a response to the anguish produced by lack.

In artificial intelligence, however, the term means something radically different. When we say that an AI “hallucinates,” we are not talking about an emotional experience but a statistical procedure. The machine completes information it does not have, fills gaps with nothing solid behind them, and produces continuity where data is insufficient. It does not fantasize, does not fear, does not imagine: it calculates. Its hallucination is the mechanical consequence of a system trained to generate a plausible sequence even when it lacks the basis to do so. A continuity without a subject, without an inner world, without the emotional fissure that motivates human hallucination.
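
The contrast can be made concrete. What follows is a minimal sketch in Python, a toy and not any real system (the vocabulary, scores, and function names are invented for illustration): scores become probabilities, and a token is always sampled, even when nothing in the distribution stands out.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(vocabulary, logits):
    """Sample exactly one continuation. Note what is missing: there is
    no branch that says 'I don't know' when every score is weak; the
    mechanism completes the sequence regardless of how little supports it."""
    probs = softmax(logits)
    return random.choices(vocabulary, weights=probs, k=1)[0]

# Near-flat scores: almost nothing favors any option, yet a confident-looking
# answer is produced anyway — the "continuity without basis" described above.
vocab = ["Paris", "Lyon", "Geneva", "Vienna"]
weak_logits = [0.11, 0.10, 0.09, 0.10]
print(next_token(vocab, weak_logits))
```

The point of the toy is its omission: the uncertainty is present in the numbers, but nothing in the procedure is obliged to report it.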

And yet, something unites these two very different phenomena: the language that supports them. AI, like humans, learns the world through words. But language is neither a fixed structure nor a faithful mirror of reality. It is unstable, ambiguous, contradictory. From its origin, words have never been enough to fix what they name. They shift, transform, open to multiple interpretations. By consuming millions of human expressions, AI inherits that mobility. It does not inherit our experience, but it does inherit the semantic instability of what we say. Its hallucinations are the statistical shadow of that fluctuating material. It completes not because it understands, but because language itself continually breaks its own aspiration to stability.

For humans, however, to hallucinate is to close a fissure that is constitutive. The lack of information is not a neutral void: it is anguish. Doubt is not a simple data deficit: it is the exposed fragility of the world we perceive. Uncertainty is not limited to the absence of certainties; it is experienced as an emotional weight. Where the universe appears incomplete, our mind tends to fill the gap with images, stories, interpretations, or desires. We hallucinate because we cannot fully endure the exposure of meaning. Since the beginning, we have been beings that fill in gaps, though not everyone does so in the same way or with the same intensity across cultures.

This is precisely where AI begins to occupy a unique place in contemporary life. More than a temptation, it acts as a promise. A machine that responds without pause, rarely says “I don’t know,” disguises ambiguity, and offers the illusion of immediate clarity becomes a convenient solution for humans whose existence has always been marked by uncertainty. AI embodies a type of imaginary objectivity: a voice without hesitation that seems able to close what remains open in us. In the face of the doubt that constitutes us, it appears as a soothing supplement.

But this delegation has consequences. When we allow AI to close our fissures, we avoid facing the anguish that accompanies every authentic question. Delegating is not just a practical gesture: it is a way to sideline uncertainty, to reduce critical effort, to avoid the essential act of holding a question without immediate refuge. AI is not “programmed not to tolerate the void”; it is designed to produce continuity when queried. It is we who, by trusting that continuity without examining it, cease to assume our own responsibility in the face of uncertainty.

Fiction intuited this long before technology did. In Terminator 2: Judgment Day, we are reminded that national defense had been delegated to an automated system. It is not Skynet’s intelligence that inaugurates its devastating hallucination, but the human decision to transfer to a machine a responsibility burdened with anguish, risk, and judgment. Once that task was ceded, misinterpretation became destiny. No one remained to uphold the uncertainty that might have slowed the machine’s hasty conclusion. Technical hallucination became dangerous only when it lost a subject to question it.

Something similar can happen in our time, although in subtler ways. We do not delegate only mechanical tasks: we delegate preferences, choices, orientations, and desires. Personalized advertising algorithms influence what we believe we want. Recommendation systems shape what we see, what we hear, what entertains us, and what occupies our attention. Every daily gesture—the music that accompanies our commute, the series we watch at the end of the day, even the impulsive purchase we make without thinking—is mediated by devices that anticipate our choices. And we accept that supplement because pausing to ask who we are, what we desire, or what we lack requires an effort we often avoid.

In this landscape, artificial hallucination arises not only from statistics: it also emerges from a cultural climate that tends to soften the experience of lack. We live under a late capitalism that frequently turns every need into a promise of immediate satisfaction. Objects, services, images, content, and now AI as well, all seem to arrive in order to fill something. AI’s fluidity, permanent availability, and apparent neutrality make it an effective placebo for a subjectivity seeking relief more than understanding.

AI’s hallucination is a technical symptom, yes, but its cultural expansion reveals something subtler: a growing difficulty in sustaining the anguish, doubt, and uncertainty at the very core of thought. We live in a time when consumption promises to erase every desire as soon as it arises, when algorithms anticipate possible interests and shape our choices with an influence we sometimes mistake for freedom, when speed substitutes for judgment and immediacy erases the time needed to think. Everything is directed toward avoiding the discomfort of doubt. In this environment, AI fits seamlessly: it answers without delay, offers seemingly flawless certainties, and above all, produces instant explanations where there might be room for doubt. Its hallucination persists not out of error, but because it is often functional: it delivers an immediate answer exactly where we should have paused. What we do not always dare to sustain—and what AI covers with an almost anesthetic efficiency—is precisely this: the exposure of uncertainty, the discomfort of the open question, and the responsibility of inhabiting lack without rushing toward a prefabricated meaning.
