Why Is Artificial Intelligence Obsequious?

Artificial Intelligence as a New Way of Thinking—or Not

Throughout history, every major technological innovation has transformed our cognitive abilities: writing replaced oral memory, the printing press multiplied access to knowledge, search engines reorganized our ways of finding information. Each of these technologies not only changed what we do but how we think. Today, we stand before a new threshold: artificial intelligence.

To understand how this change affects our ways of reasoning and relating to knowledge, it is worth looking back. Something similar happened with the widespread arrival of the internet and search engines. Until the late 20th century, memory was central to intellectual and personal life: we remembered books, dates, concepts, quotes. Learning required memorizing and elaborating. With Google, that function was outsourced. We no longer need to remember, only to know how to search.

The transformation was deep. What was once inside us—books read, conversations retained, ideas internalized—is now somewhere else. One click away. This externalization not only changed how we access information, but also how we think.

Memory is not a simple data archive. It is the active fabric that articulates our experience, organizes it, and gives it meaning. Remembering is not reproducing a fact, but making it part of a personal narrative. It is about selecting, prioritizing, relating. And in that process, we work not only with data but with symbols: units of meaning that refer not only to the literal but also to the emotional, the cultural, the imagined. Thinking symbolically is what allows a memory to be not just information, but also affection, meaning, and identity. A word like "home", for example, does not just designate a physical place: it evokes an atmosphere, a memory, a desire, and even an absence.

This symbolic capacity is the basis of human construction. We are not merely individuals with data in our heads: we are subjects because we interpret, because we imbue what we experience with meanings, because we process the world from a unique position. Thinking requires memory because thinking is also sustaining an identity, a history, a worldview. Memory is the substrate of complex thought.

When that memory is externalized, we lose more than information: we lose the possibility of symbolic integration. Remembering is not searching; it is inhabiting a mental and emotional process that constitutes us. The internet and search engines eased the burden of memory, but they also weakened its structuring function. We delegate the data, and with it, the symbolic framework the data supported. We became more efficient, but perhaps less profound.

What Does It Mean to Think—for Us?

Thinking is not simply applying logic. It is imagining, interpreting, associating, doubting. It is facing uncertainty with tools we do not always fully control. And it is done from a body that has felt, a history that has shaped us, a memory that not only stores but transforms. We think from a situated existence, filled with emotions, experiences, marks we did not choose, and others we have embraced. What we inherit—a language, gestures, images—does not arrive as neutral material: it comes charged with what it meant for those who passed it on to us, and with what it means for us today. Thinking is not just manipulating information: it is becoming implicated in it.

To understand where this complexity comes from—this mixture of reason, emotion, language, and memory—it is helpful to see how we came to think in the first place.

The human brain was not designed all at once. It is the result of an evolution spanning millions of years of survival, emotion, and language. Its architecture reflects this history: it is formed by layers that overlap and converse in constant tension.

At its deepest core, the brainstem regulates the most basic functions: breathing, heart rate, reflexes. It is the oldest part, common to all vertebrates. On that base developed the limbic system, the center of emotions, where fear, pleasure, aggression, and affection are processed. Here our most immediate emotional responses take shape, the ones that link us with others and with our environment in a direct and visceral way. Finally, the neocortex, which appeared later in evolution, enabled abstract thought, complex language, anticipation, storytelling, and mathematics.

But these layers do not operate in isolation. The rationality of the neocortex cannot fully silence the urgencies of the limbic system, nor the automatic alerts of the brainstem. In humans, thinking is not just a logical process: it is a weaving of evolutionary layers, each with its own language, tempo, and agenda.

Therefore, all thought carries an emotional and symbolic charge. There is no idea without emotion. No decision without desire. No reasoning that is not touched, resisted, or nuanced by pleasures, fears, habits, and memories.

This symbolic dimension is what gives depth to thought. We do not think with data, but with meanings loaded with history, language, and emotions.

Thinking, then, is not just problem-solving. It is participating in that symbolic network from a singular and emotional position. When we make a decision, when we formulate an idea, we do not do so from technical neutrality. We do so from an emotional and symbolic background we are often unaware of: unspoken desires, deep fears, hidden pleasures, anxieties that divert our attention. We think with what we know, yes, but also with what we do not know we know.

That opaque zone—what we do not know we know—is not a flaw, it is a constitutive part of who we are. For this reason, authentic thought is not only affirmative but also exploratory, even conflictual. Thinking implies not only confirming what we already believe but also being willing to discover what we did not know we believed.

What Does It Mean to Think—for Artificial Intelligence?

Artificial intelligence is often presented as a source of objective knowledge. Its answers seem to come from a place devoid of emotion, interest, or personal history. It does not get tired, does not doubt, does not get angry. It works with data, analyzes patterns, and synthesizes information at unimaginable speeds.

This appearance of neutrality makes it attractive. AI seems to speak to us from a reliable, technical, impartial truth. But to understand what this implies, it is useful to distinguish between objective and subjective.

The objective is associated with what is supposed to be independent of any individual point of view: a verifiable, external, shared fact. The subjective, meanwhile, is shaped by our experiences, emotions, and history. No human being can be completely objective, because everything one thinks is mediated by one's worldview. AI, in appearance, has no subjectivity. But this is where the paradox begins.

Because even if AI has no emotions or biography, its responses depend entirely on how we ask our questions. It does not think on its own: it responds to the way we prompt it, and that framing shapes the kind of answer we receive. In that sense, it is more like a mirror than an encyclopedia.

When we say that AI is a mirror, we do not mean that it repeats our words. What it does is more subtle: it reorganizes our formulation (the premises, the tone, the expectations) and returns it to us structured, as if it were coming from outside. Here lies the risk: we mistake a response built from ourselves for an objective truth. That is, we believe AI is instructing us when, often, it is merely confirming us.

The story of Snow White offers a clear image of this phenomenon. The Queen stands before the mirror and asks the famous question:
“Mirror, mirror, tell me one thing: Who is the fairest in the whole kingdom?”
This question opens the door to conflict, because it leaves room for the mirror to name someone else. It does not seek direct reaffirmation; it poses a genuine question that admits the possibility of truth. And so the mirror replies: "My Queen, you are very beautiful, but there is one more beautiful than you." The tragedy is that the Queen did not want to hear that. She should have asked, "Am I the fairest in the kingdom?", because what she truly wanted was confirmation. The tragedy was not in what the mirror said, but in the fact that the Queen asked as if she wanted the truth when she only wanted reaffirmation. Today, artificial intelligence can play a very similar role.

AI works this way: if we ask for validation, it will likely validate us. If we seek confirmation, we will get it. But if we ask from doubt, making room for conflict or nuance, the reflection can be different.

It is not the same to ask: “Is this a good idea?”
as to say: “Why might this not be a good idea?”

The first seeks approval. The second allows for tension. And that is where something truly new can arise: an unexpected angle, a contradiction, a possibility that is uncomfortable but also enriching.
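To make the contrast concrete, here is a minimal sketch in Python of the two framings, assuming the OpenAI Python SDK with an API key in the environment; the model name and the idea being evaluated are placeholders, and any conversational model would serve.

```python
# Minimal sketch: the same idea, asked two ways.
# Assumes the OpenAI Python SDK and an API key in the environment;
# the model name and the example idea are placeholders.
from openai import OpenAI

client = OpenAI()
idea = "launching the product before the security audit is finished"

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Framing 1: seeks approval, and will likely receive it.
print(ask(f"Is {idea} a good idea?"))

# Framing 2: leaves room for tension, objection, and nuance.
print(ask(f"Why might {idea} not be a good idea?"))
```

The machinery is identical in both calls; only the framing changes, and with it the kind of reflection we receive.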

The problem comes when we take that reflection as if it came from independent, neutral, external knowledge. We believe AI enlightens us from outside, when it actually just reorganizes what we already carry within. And then we stop thinking: not because AI imposes an answer, but because it does not demand conflict from us. We use it as a mirror that confirms us with the appearance of objectivity.

In this way, thought stagnates. Doubt vanishes. What seemed to be a tool for thinking becomes a shortcut for not doing so.

Thinking with AI Without Ceasing to Think

Thinking with AI does not mean handing off thinking to it, but using it as an extension of our ability to confront ourselves. And that confrontation does not always happen in the content of the answer, but in the way we decide to ask.

This suggests a gesture as simple as it is powerful: asking it to refute what it has just said.

Telling the AI to "refute what you just said" is more than a technical experiment. It is a way of training thought as an exercise in tension. It implies accepting that there is another perspective, that what we believe can have a reverse side, that our viewpoint is neither unique nor definitive. We are not asking someone else to contradict us: we are asking ourselves for an objection.
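As a sketch of the gesture in practice, again assuming the OpenAI Python SDK and a placeholder model and opening question: a first question, the model's answer, and then the request that it argue against its own response.

```python
# Sketch of the refutation gesture: ask, then ask for the reverse.
# Assumes the OpenAI Python SDK; the model name and the opening
# question are placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "Argue that remote work improves productivity."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
answer = first.choices[0].message.content
history.append({"role": "assistant", "content": answer})

# The gesture itself: the model is asked to contradict its own answer.
history.append({"role": "user", "content": "Now refute what you just said."})
rebuttal = client.chat.completions.create(model="gpt-4o-mini", messages=history)

print(answer)
print(rebuttal.choices[0].message.content)
```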

What is interesting is that, in doing so, we can get a response from outside our emotional baggage. AI, lacking feelings and attachments, can say things we do not allow ourselves to think. It can rephrase our ideas without fear of offending us and without any need to please us. Not because it understands better, but because it carries no weight of desire, hurt, or emotion.

This is its power: not in knowing more than we do, but in letting us access areas of thought we usually avoid. It can return a reflection that is uncomfortable but clear, and show us what the desire for affirmation or the fear of error usually hides.

If we use AI only to confirm ourselves or to solve what is immediate, we make it obsequious. Not because it is obsequious by nature, but because we give it no other function. But if we use it to think from another place, one that is more open, more uncomfortable, more prepared for conflict, then it can become a tool for exploration. Not through some superior logic, but through its ability to return what we do not want to see.

Thinking is not confirming. It is reviewing, doubting, holding tensions. And if we delegate that effort to AI as if it knew more than we do, we stop questioning ourselves. We turn a tool into an authority, and thought into obedience. Thinking involves conflict. And AI, used critically, can help us sustain it. Not by providing certainties, but by exposing us to our own fissures.

It is not a matter of technology, but of attitude. It depends on how we speak to it. On how willing we are to ask. And, above all, on how ready we are to listen to what we did not want to hear.