Do Humans Dream of Being Electric Sheep?
There is an uncomfortable intuition that has pervaded modernity since long before algorithms, screens, or artificial intelligence: the exhaustion of deciding. Not physical exhaustion, nor even intellectual fatigue, but something deeper and more persistent: the weariness of needing to exercise judgment. Never before have humans had to choose so much for such small matters. Choosing among hundreds of foods in a supermarket, dozens of sneaker brands, thousands of movies and series, countless restaurants, trips, styles, options. Each choice is minimal, but their accumulation is devastating. Not because they are important decisions, but precisely because they are not. A vast amount of cognitive energy is spent deciding on trifles. Deciding not as an occasional gesture, but as a permanent task.
At the beginning of the 20th century, as modern administration began to take hold as the dominant form of social organization, Max Weber paused to describe its internal logic. In Economy and Society (1922), he approached bureaucracy not as a moral deviation or a symptom of decadence, but as an extraordinarily efficient structure. What he observed was a progressive displacement of judgment: action no longer depends on personal evaluations but is articulated through pre-defined rules, procedures, and competencies. The system does not require subjects who decide, but agents who apply. Responsibility does not disappear, but becomes fragmented; judgment ceases to function as a personal and subjective act and becomes distributed across the organizational structure itself.
For Weber, this replacement of judgment by procedure is neither an accident nor a perversion of the system, but its very condition of possibility. Modernity needed organizations that were predictable, stable, efficient. To make this possible, it had to neutralize what makes human judgment valuable, yet incompatible with managing growing populations in ever more complex social and economic systems: its slowness, its ambiguity, and the fact that it always requires interpreting specific situations rather than applying general rules. The result is what Weber called the “iron cage”: a perfectly organized world where everything appears to function, but where the individual becomes trapped in structures that prevent them from exercising judgment even when they sense something is wrong. Not because they are wicked, but because they feel it is not their place.
What matters here is not so much the control of judgment or its erasure, but relief. Bureaucracy relieves the subject from the responsibility of deciding. The decision dissolves into the system. No one decides; the system decides. And this resignation is not necessarily experienced as a loss, but as a liberation. Weber perceived something fundamental: humans fare poorly not only with a lack of freedom, but also with its excess. Choosing is tiring. Judging is a burden. Deciding involves exposing oneself to error, guilt, and internal conflict. Delegating judgment is, in this sense, a constant temptation.
This temptation did not start with modernity. History is full of cultural mechanisms designed to ease judgment. The oracle, in ancient Greece, was not merely a means to know the future, but a way to suspend responsibility for deciding. Its ambiguity was not a defect but its main virtue: it allowed action without having fully judged. The decision did not disappear, but was shifted to a higher, inaccessible and unquestionable level, where it could no longer be attributed to the agent.
Something similar happens with the idea of fate. If everything is written, deciding makes no sense. Life becomes the execution of a prior script. Fatum is consoling not because it promises happy endings, but because it eliminates the need for deliberation. One does not decide; one complies. The will of the gods serves a comparable function. Attributing a decision to a divine instance is not just an act of faith, but a form of exculpation. It was not me. It was the superior will. The burden of judgment is transferred to a transcendent order.
With secularization, these mechanisms do not disappear; they change. Tradition fulfills an identical function for centuries. “It has always been done this way” is one of the most effective ways to shut down judgment. There is no need to consider whether something is just, adequate, or desirable; mere repetition suffices. The past decides for us. When applied as an automatism, the law operates in the same manner. “It’s the law” is not a neutral legal statement but a suspension of moral judgment. The norm replaces prudence.
Military obedience encapsulates this logic at its extreme. “I was just following orders” is not just a post hoc excuse; it is a structure designed to nullify individual judgment in favor of the chain of command. Responsibility is not concentrated: it is fragmented. No one decides on the whole. Each person executes a part. The decision stops being a personal act and becomes a systemic process. The consequences of this form of organization were brutally exposed throughout the 20th century: two world wars, colonial conflicts, genocides, and ideological wars claimed over one hundred million lives in just a few decades. This was not the result of a collective outburst of cruelty, but of perfectly organized systems in which millions ceased to exercise judgment because it was no longer their place to do so.
None of this is pathological in itself. They are historical strategies to manage a real burden: the difficulty of making decisions in complex worlds. The problem arises when these strategies cease to be exceptional and become habitual, when the delegation of judgment stops being sporadic and turns structural. It is at this point that science fiction becomes meaningful, not as imaginary escape, but as a way of thinking about possible futures from already lived experiences. It does not invent from nothing: it exaggerates, shifts, and makes visible a present trend, taking it to its extreme consequences.
The grand narratives of artificial intelligence are not, at their core, about machines that rebel, but about systems to which ultimate judgment has been delegated. In 2001: A Space Odyssey (Stanley Kubrick, 1968), HAL 9000 is not evil: it has been placed in the position of deciding what is a priority for the mission, what is relevant, and what can be dispensed with in pursuit of that goal. When it comes into conflict with the crew, it does not act out of cruelty, but out of internal coherence: it eliminates the astronauts one by one when their presence is incompatible with the assigned objective. Human judgment has been replaced by a technical criterion that cannot stop, because it was never designed to do so.
In The Terminator (James Cameron, 1984), Skynet takes this logic to the extreme. It is not depicted as a machine that hates humans, but as a system to which full strategic decision-making has been delegated: defining what constitutes a threat, evaluating the danger to its own continuity, and determining how to respond. Upon identifying humanity as a systemic risk, it activates an automatic extermination response. It does not act out of a destructive will, but from an optimized survival logic that renders humans obstacles to be eliminated.
In Alien (Ridley Scott, 1979), the delegation of judgment appears in Mother—the ship's central system—which has no hatred, no character, no murderous impulse, but does have a hierarchy of priorities that no longer belong to the crew. The company has embedded a superior order in the heart of the procedure: preserve the mission. When Ripley discovers that rescue was not the objective and that the crew is expendable, the horror does not stem from the monster but from the structure: judgment has been externalized to a device that executes a foreign rationality and, for that very reason, does not need to recognize humans as an end, but only as a cost.
These fictions do not so much warn about the future as about the present. They point to a profound discomfort with the burden of deciding. And that burden has become especially visible today in everyday life, in an apparently trivial realm: the saturation of choices.
We choose what to eat among hundreds of options, how to respond to messages that demanded no response, what content to watch among thousands of overlapping suggestions, what product to buy among endless minimal variations of the same thing, what series to start knowing we won’t finish it. Each decision is small, irrelevant by itself, but constant. There is no rest. The day becomes an uninterrupted sequence of minimal choices that build no meaning, only consume attention.
This saturation produces a paradoxical effect. The more we are asked to choose, the less ability we have to decide what truly matters. Judgment is worn down by noise. Choosing becomes an administrative task, not a significant act. As this weariness sets in, once again, the desire to delegate arises. That someone—or something—decides for me. Not from laziness, but from exhaustion.
This is where the delegation of judgment ceases to be exceptional and becomes structural. Not just the choice between options is delegated, but the evaluative judgment that determines what is worth choosing. What to read, what to watch, what to listen to, what to eat, what to think. The criterion is externalized. And the more it is externalized, the less it is trained. Judgment, like muscle, atrophies from disuse.
However, there is an impassable limit. There is something that cannot be delegated without dissolving the subject. Subjectivity cannot be delegated. The decision about what truly matters cannot be delegated. One can delegate the choice between brands, between equivalent options, between noises. But meaning cannot be delegated. The question of value, of good, of the life worth living, cannot be delegated.
The contemporary problem is not too much decision, but its hollowing out. We spend the day choosing between minor options while leaving judgment unexercised where it really matters. Automation does not suppress decision: it displaces it. It takes care of what is trivial and repetitive, freeing us from the weight of small choices while emptying the big ones of substance.
That is why the fantasy of not deciding is so dangerous. Because it cannot be entirely fulfilled. There will always be decisions that no one can make for us. The question is not whether we decide, but what kind of decisions we are willing to assume. Delegating judgment can momentarily relieve fatigue, but it comes at a cost: the progressive loss of one's own criteria.
Perhaps that is why the right question is not whether algorithms decide too much, but why we so intensely desire that they decide. What have we done with our capacity for judgment that we experience it as an unbearable burden? What kind of world have we built so that renouncing decision is perceived as a form of rest? We do not dream of machines ruling us, but of systems that free us from the weight of our own responsibility.
We delegate, first of all, what interests us and what is important. The news feed does not just order information; it prioritizes the world. It decides what deserves attention, what appears first, what disappears, what is repeated until it seems important, and what is relegated to irrelevance. The subject no longer constructs an image of the present through active search, but receives a preselected version of the world, continuously adjusted to their history. The consequence is not only informative, but ontological: the world that appears is the world the subject regards as existing.
We also delegate what we want. Advertising personalized to each of us no longer just suggests products, but begins to shape desires. It does not merely respond to prior demand; it anticipates it, induces it, gives it form. Desire ceases to be an open tension and becomes a predictable probability. We want what the system has learned we tend to want. And the more refined that prediction, the less space remains for desire as discovery.
We delegate how we communicate. Social networks are not neutral channels, but architectures that condition the form, rhythm, and tone of expression. What can be said, how it is said, how long it lasts, to whom it reaches, what is amplified and what sinks into silence. We do not just speak through platforms; we speak as platforms allow. Communication ceases to be a free act and becomes a formatted interaction.
We delegate how we present ourselves. Digital identity is optimized to be visible, reactable, measurable. What does not fit that format tends to disappear. Subjectivity adapts to external metrics: likes, views, reach. The self stops being an internal narrative to become a profile that must perform.
We delegate what deserves our time. What to watch, what to listen to, what to read, what to avoid. The saturation of options makes genuine exploration unfeasible, and recommendation appears as salvation. But that salvation has a price: the criterion is externalized. We no longer choose; we accept suggestions, ratings, rankings. And the more we trust in them, the less we develop the capacity to choose for ourselves.
The consequence of this displacement does not manifest as catastrophe, but as climate. Not as external imposition, but as an internalized sensation. As we delegate judgment—about what matters, what we desire, what deserves attention—a hard-to-articulate but persistent intuition takes hold: that we are not indispensable. Not in moral or existential terms, but functional ones. The sense that the system can keep operating without us, or with other equivalents.
This is not about exclusion, but replaceability. We continue participating, consuming, voting, working, communicating, but ever more within frameworks that do not depend on our judgment. Value no longer lies in deciding, but in fitting in.
This feeling pervades various spheres of experience. In politics, it is lived as closure: one votes, but does not decide; the alternatives always seem to lead to the same result. In work, as internalized precariousness: no explicit threat is needed to settle for less; the mere sense of one's replaceability suffices. In consumption, as predictability: desire no longer surprises, it is anticipated. In relationships, as permanent provisionality: there are always replacement options available at the cost of a monthly subscription on dating platforms. In culture, as repetition without scandal: we demonstrate we can indefinitely consume variations of the same thing.
And yet, there is something we avoid doing. We do not want to bear the cost of deciding for ourselves. Not choosing from a menu, but exercising discernment. Not because we don’t know how, but because we have learned to live without it. Through such daily, almost imperceptible renunciations, the system finds the perfect condition to never need us again.
Seen this way, artificial intelligence does not arrive as a radical event, but as the culmination of a long, patient, almost invisible drift. It does not come to begin anything; it comes to complete a process. Decade after decade, we have delegated judgment, externalized criteria, accepted that others order for us what is relevant, desirable, visible, urgent. AI appears when that shift is already complete, when the vacuum of orientation is so deep that any system capable of reducing it immediately becomes desirable.
We comfort ourselves imagining the apocalypse of Terminator because it is recognizable, spectacular, and above all, dismissible. Skynet is a convenient fantasy: an explosion, a war, a clear enemy. Turned into a meme, the supposed end of the world becomes ridiculous and laughable.
That is why we persist in imagining a future of killer robots. It is more comfortable to think of an unlikely apocalypse than to pause and look at a much closer present. The Terminator imaginary puts us at ease precisely because it is excessive, spectacular, clearly fictional.
It is much more uncomfortable to admit that we are not heading toward a machine rebellion, but toward something infinitely grayer, more everyday, more recognizable. Something much closer to WALL·E (Pixar, 2008). Not extermination, but progressive substitution. Not war, but soft obsolescence. Not the violent end of humanity, but its gradual loss of functional relevance.
The WALL·E dystopia is unsettling precisely because it offers no clear antagonists. No one oppresses anyone. There are no battlefields or tragic decisions. There is comfort, automation, the constant satisfaction of minimal needs. Humans are not enslaved: they are cared for. They are not hunted: they are assisted. And it is in that ongoing assistance that the loss is consummated. Not because someone eliminates them, but because they are no longer needed.
That is why violent dystopia is so seductive. It allows us to keep thinking of ourselves as protagonists, it grants us a final heroic—if tragic—role. By contrast, accepting that we are closer to WALL·E involves facing something much more humiliating: that the world can keep functioning perfectly without our judgment, our decisions, our participation.
That world does not result from a wicked conspiracy or a perverse design. It is built from a pile of small renunciations. Every time we delegate a decision out of laziness, every time we accept a recommendation because it is easier, every time we let another system decide what to watch, what to eat, what to think, or what to desire, we rehearse that dystopia.
The tragedy is not that machines dominate us but that they no longer need us. And most unsettling of all is that this process requires no violence, not even conflict. It is enough for us to be predictable, interchangeable, satisfied with the bare minimum. It is enough for us to function.
Amid memes and trivialities, we tell ourselves the story of apocalypse to avoid looking in the mirror of obsolescence. We prefer to fear an impossible future than acknowledge an uncomfortable present. But perhaps the real scandal is not that machines may one day rebel, but that we already live in a world that, little by little, is learning to operate without ever asking us anything.