Why should we be unpredictable?

15 min read

Computational Irreducibility and the Limit of Prediction

We believe that algorithms know us. That they know what interests us, what we like, even what we want before we look for it. But that sense of precision is misleading. It's not knowledge; it's adjustment. There is an operational limit: not all behavior can be compressed into a prediction made in advance. That limit—irreducibility—marks how far prediction systems can go.

Algorithms don't know us. They only optimize what we do now.

Computational irreducibility describes a precise limit to prediction: there are processes whose outcome cannot be known without executing the process itself. The concept, developed by Stephen Wolfram, does not point to a lack of information or to randomness, but rather to the impossibility of compressing certain systems into a formula shorter than their own evolution.

For a long time, we have thought about the world under an implicit premise: if we know the laws governing a system and its initial conditions, we can anticipate its future state. This idea not only structures classical physics but also a general way of understanding reality: the future as something, in principle, accessible from the present.

Irreducibility introduces a decisive nuance. There are systems where this expectation fails, not because the rules are unknown or complex, but because there is no way to “accelerate” their evolution. The system does not allow shortcuts. To know what happens at a given point, it is necessary to go through all the intermediate states leading up to it.

This implies that knowledge of the rule does not equate to knowledge of the outcome. You can fully understand how the system works and still not be able to anticipate its future state without executing it step by step. The process cannot be replaced by an abbreviated prediction.

The consequence is strict: there are processes whose future is not available before it occurs. It's not that the system is unpredictable in an absolute sense, but rather that it is not reducible to a simpler form that allows it to be anticipated. The only way to know its evolution is to let it unfold.

This limit is formal in the computational realm, but it introduces a broader idea: not every system can be summarized without loss in a prior projection. There are dynamics whose structure requires being traversed to be known.
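
To make the limit concrete, here is a minimal Python sketch of Wolfram's Rule 30 cellular automaton, the canonical example of an irreducible process. The sketch is illustrative rather than definitive: the rule is fully known and fits in one line, yet no known shortcut yields the center cell at step t without computing every intermediate row.

```python
# Rule 30: each new cell is left XOR (center OR right). The rule is
# trivially simple, yet the only known way to learn the state at step t
# is to compute steps 1 through t-1 first.

def rule30_step(cells: list) -> list:
    """Apply Rule 30 to one row (wrap-around edges)."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

def center_cell_at(t: int, width: int = 201) -> int:
    """The center cell after t steps. There is no shortcut formula:
    the loop below IS the prediction."""
    row = [0] * width
    row[width // 2] = 1  # a single live cell as the initial condition
    for _ in range(t):
        row = rule30_step(row)
    return row[width // 2]

# Knowing the rule completely does not let us skip ahead.
print([center_cell_at(t) for t in range(16)])
```

Nothing here is hidden, random, or underspecified; the cost of knowing the outcome is simply the cost of running the process.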

In what sense is the human being irreducible?

Transferring the idea of irreducibility to human beings requires an important clarification: we are not talking about a formal property, as in the computational systems described by Stephen Wolfram, but about an operational analogy. The claim is not that the human being is irreducible in the same technical sense, but that human behavior presents comparable limits when it comes to being anticipated or compressed into a model.

The human being is not a system with explicit, closed, and accessible rules. We do not have a “transition function” that, given initial conditions, allows us to exhaustively project their evolution. Instead, human life unfolds in an open environment: traversed by context, language, history, interaction, and contingency. Each decision not only follows the previous one but modifies the conditions under which subsequent ones will take place.

Within this framework, the analogy with irreducibility is clear: not because there are no rules or regularities, but because it is not possible to compress the complete trajectory into a sufficient prior prediction.

This does not imply that human behavior is generally unpredictable. Stable patterns exist: habits, preferences, social dynamics. Disciplines such as economics, psychology, or marketing successfully operate on these. It is precisely this dimension that allows prediction systems to function.

But that is not the entirety of behavior. There is always a dimension that cannot be exhaustively anticipated: decisions that do not follow directly from the past, changes of direction, interruptions, unforeseen variations. Not because behavior lacks structure, but because it cannot be completely compressed into that structure.

The most precise formulation would be this: human behavior contains both reducible and non-reducible components. It is not an absolutely chaotic system, but neither is it a completely modelable one.

This limit has a direct consequence for experience.

We cannot organize future life as if it were an available object. We can plan, project, estimate. But none of these operations capture the totality of what will happen. Every forecast is necessarily partial, because the very development of experience generates information that did not exist before.

Here the analogy with irreducibility becomes particularly clear: it is not that the future is completely unknown, but that it is not completely available before it is lived.

This condition has two faces.

On the one hand, it makes the unexpected possible. Novelty, creativity, and disruption are not anomalies, but consequences of the trajectory not being completely fixed in a prior projection. If life were totally reducible, it would also be totally predictable and, in that sense, closed.

On the other hand, it introduces a limit to control. We cannot exhaustively anticipate the course of our own experience. Understanding appears after the process, not before. Life is understood in retrospect because only then is the information that the process itself has produced available.

We move, therefore, in an intermediate space: there are regularities, but no closure; there is structure, but not total prediction. That tension—between what can be anticipated and what can only unfold—defines the framework in which human experience takes place.

But this limit is not just a theoretical question about human experience. It has a direct consequence in the arena where behavior is most systematically anticipated today: recommendation algorithms. Understanding how these systems operate requires taking a step back and placing them within the framework in which they make sense. It is not just about technology, but about an economic model that structurally depends on capturing, measuring, and optimizing attention. The question is no longer just whether they can predict us, but why they need to.

Attention: the economic foundation of the system

Attention is no longer just a psychological or cultural phenomenon. It is a central economic quantity. In 2014, the global advertising market was around $523 billion. In 2024, it exceeded $1 trillion for the first time, and it is projected to reach $1.24 trillion in 2026. In just a decade, the system has practically doubled, with sustained annual growth rates of around 6–8% and short-term forecasts above 10%.
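
As a quick sanity check on those rates, the implied compound growth can be computed from the round figures just quoted (a back-of-the-envelope calculation on the numbers above, not an independent data source):

```python
# Back-of-the-envelope check of the growth rates implied by the
# figures quoted above (amounts in billions of dollars).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

print(f"2014 to 2024: {cagr(523, 1_000, 10):.1%} per year")             # ~6.7%
print(f"2024 to 2026 (forecast): {cagr(1_000, 1_240, 2):.1%} per year")  # ~11.4%
```

Both results sit squarely inside the ranges quoted above.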

But what is decisive is not just the growth, but its form. Digital advertising already accounts for more than 70% of global investment and continues to increase. In markets such as the United States, it is growing at rates close to 15% annually. The contemporary advertising economy no longer revolves around traditional media, but around platforms capable of capturing, measuring, and optimizing attention in real time.

Furthermore, this value is not distributed; it is concentrated. Google and Meta capture a dominant share of the global market. In Google's case, advertising—through Search, YouTube, and its network—represents approximately 75% of its total revenue, more than $260 billion annually. Meta is even more dependent: over 95% of its revenue—more than $160 billion, generated mainly by Facebook, Instagram, and WhatsApp—comes from advertising. Its recent growth has also been sustained by simultaneous increases in impressions and in price per ad, reflecting an intensification of the model.

This core is joined by Amazon, which has turned its commercial ecosystem into a top-tier advertising platform, and by ByteDance, which, with TikTok, has built one of the most effective infrastructures for capturing attention through short video. These are not isolated services, but a set of systems that organize much of our daily access to information, entertainment, and consumption.

That is why attention has become a primary economic asset. Every second retained, every interaction, every impression can be converted into revenue. The model is direct: capture attention, prolong it, and convert it.

But here the structural problem arises. This system needs to anticipate behavior to function efficiently. And yet, the object on which it operates—human behavior—is not completely reducible. It cannot be compressed into a reliable global prediction.

The consequence is not the failure of the system, but its reconfiguration. If it cannot predict the subject as a whole, it must reduce the problem to the only point where prediction remains viable: the immediate present. That is where the attention economy finds its operational form.

This economic system is not neutral: it depends on reducing behavior to the present.

Why algorithms don't predict but adapt

The economic model that sustains the attention economy demands something very specific: anticipating behavior. Every impression, every second retained, every interaction has value because it can be converted into revenue. But this demand encounters a limit we have already described: human behavior is not completely reducible.

This doesn't mean that nothing can be predicted. In fact, recommendation systems work precisely because there are regularities. But it's also not possible to build a model that reliably anticipates the complete trajectory of an individual. You cannot know what a person will do in a week with the same degree of accuracy as you can estimate what they will do in the next few seconds.

That's the friction point. And also the turning point.

Recommendation algorithms don't solve this problem. They circumvent it. Instead of trying to predict the subject as a whole, they reduce the scope of prediction to the only level where it remains viable: the immediate present.

The question ceases to be “who is this user?” or “what will they do in the future?” and becomes something much more confined: what are they most likely to do now?

This shift completely changes the nature of the system. The unit of analysis is no longer the person as a trajectory, but the specific decision: a click, a pause, a scroll, a repeat. Each of these actions does not require a complete theory of the subject to be anticipated. It is enough to observe what has just happened.

In that sense, the algorithm does not build a stable model of the user. It continuously adjusts to them. It observes an action, modifies the environment, observes again. It doesn't need to know where the user is going; it just needs to increase the probability that they will continue.
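
Stripped to its skeleton, that loop looks something like the sketch below: a textbook epsilon-greedy bandit. This is an illustration of the logic, not any platform's actual code; the item names and engagement rates are invented, and simulate_user is a hypothetical stand-in for the real user's next response.

```python
import random

ITEMS = ["video_a", "video_b", "video_c"]  # invented item names

def simulate_user(item: str) -> bool:
    """Hypothetical stand-in for the user's immediate response."""
    rates = {"video_a": 0.05, "video_b": 0.12, "video_c": 0.08}
    return random.random() < rates[item]

def recommend(clicks: dict, shows: dict, epsilon: float = 0.1) -> str:
    """Exploit the best observed click rate; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ITEMS)
    return max(ITEMS, key=lambda i: clicks[i] / max(shows[i], 1))

clicks = {item: 0 for item in ITEMS}
shows = {item: 0 for item in ITEMS}

for _ in range(1000):
    item = recommend(clicks, shows)
    shows[item] += 1
    clicks[item] += int(simulate_user(item))
    # No model of the subject anywhere: the loop only sees the last
    # response and re-weights the next recommendation accordingly.

print({item: round(clicks[item] / max(shows[item], 1), 3) for item in ITEMS})
```

Notice what the loop never contains: a profile, a history, an identity. It converges on whatever is being responded to right now, which is exactly the reduction described above.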

Here the decisive turn occurs: global prediction is replaced by real-time adaptation. But this reduction is not only temporal. It is also a reduction in the level at which behavior is modeled.

On long time scales, what is relevant tends to be what is most complex: personal history, identity, cultural context, deep motivations. But all of that is difficult to parameterize, slow to process, and, above all, unstable.

At short time scales, by contrast, much more basic regularities come into view: attention to novelty, repetition of recent patterns, sensitivity to small variations. It is not necessary to understand the subject to operate on these dynamics. It is enough to detect how they respond.

That is why, although algorithms do not explicitly model biology or deep psychology, they end up operating at that level. Not because it is truer, but because it is more predictable.

In practice, this implies that the system treats the user as a sequence of responses to immediate stimuli. Not in an ontological sense—it does not claim that the human being is that—but in an operational sense: it is the only level at which prediction can be sustained continuously.

This explains its effectiveness.

Algorithms work because human behavior contains reducible components. There are patterns, habits, recurrent responses that can be observed and exploited. And in the short term, these patterns are stable enough to build useful predictions.

But it also explains its limit.

What falls outside that model is everything that cannot be compressed within that framework: decisions that do not respond to immediate optimization, changes of direction, interruptions, variations that do not directly derive from the recent past.

The key, then, is not that the human being is generally unpredictable. It is that they are not completely reducible. And recommendation systems work precisely because they don't try to encompass everything: they focus on the part that can be modeled.

The result is not a prediction of the subject, but a management of their attention in the present. The algorithm does not need to know who you are. It needs to know what you are most likely to do now. And act accordingly.

Following the algorithm or breaking free

All this does not happen in a vacuum. It manifests in a very concrete and recognizable experience. We open a platform and feel that the algorithm knows us. That it knows what interests us, what we like, even what we want to see before we look for it. The sequence seems finely tuned, almost personal. It's not a random accumulation of content, but something that fits with us with an unsettling precision.

From there, an almost imperceptible shift occurs. We stop using the algorithm as a means and begin to inhabit it as if it were a space to find what we are looking for. We go in expecting to discover something that interests us, and little by little we stop looking elsewhere. What appears in the feed begins to replace the search itself. It's not that the algorithm answers our questions; it starts to define them.

But that sense of knowledge is misleading. The algorithm doesn't know us in the way we usually think. It doesn't know who we are, what we want for our lives, or where we want to go. It doesn't have access to those dimensions, nor does it need them. All it can do—and it does it with enormous efficiency—is estimate what we are most likely to do now.

The system does not build an understanding of the subject, but a continuous response to immediate behavior. It detects what captures our attention, what prolongs it, what keeps us engaged, and reorganizes the environment based on that. There is no intention to understand us, only to adjust the flow so that we remain within it. The experience thus becomes a sequence of stimuli increasingly finely tuned to our immediate responses.

That's why it's so absorbing. Because it doesn't work on who we are over time, but on what we do at each moment. And at that level, the precision is sufficient to sustain attention for long periods.

The problem arises when this functioning begins to colonize the rest of our experience. Attention is exhausted in this immediate response circuit. What remains outside—what is not mediated by this continuous adjustment—appears slower, more opaque, less intense. An inversion then occurs: we begin to measure experience by the algorithm's standard.

We look for the same intensity outside that the system produces inside. We expect reality to respond with the same immediacy, with the same ability to capture our attention in seconds, with the same constant succession of relevant stimuli. And when it doesn't—when it demands time, effort, waiting—it is perceived as insufficient.

But that intensity is not neutral. It is built on a very specific logic: that of reinforcing what can already be anticipated in the short term. The algorithm does not expand our experience; it optimizes it around our immediate response. It returns a version of ourselves that works well within that system.

Being subject to the algorithm is not being subject to a machine that knows us, but to a system that operates on the part of us that is most easily predictable: what already responds, what fits, what can be repeated. In that sense, we become fixed in the immediate present, in a stimulus-response logic.

Breaking free does not mean rejecting the system or becoming completely unpredictable. It means not confusing its scope. It means not delegating to it the definition of what interests us, what we seek, what we want to do with our time.

Because there are questions that the algorithm cannot answer, and that it has no interest in answering: what do we want to be, where do we want to go, what meaning do we give to what we do when we are not reacting to a stimulus?

These questions do not appear in the feed. They do not derive from a pattern. They cannot be inferred from a pause or a click. They demand another relationship with experience, one that is not compressed into the immediate present.

Being unpredictable, in this context, is not an abstract gesture. It is a way of not being completely confined to that part of us that can be modeled, reinforced, and exploited. It is keeping open the possibility of deviating, of interrupting, of searching without the answer already being prepared.

Not because the algorithm is insufficient, but because it is not designed for that.

The more precise it seems, the more it fixes us in the present. The better it works, the narrower the framework in which it recognizes us. If we confuse that precision with knowledge, we end up accepting that what appears in the feed defines who we are and what we can be.

But the algorithm doesn't know us. It keeps us reacting.
It doesn't predict who we are; it optimizes what we do now.
