AI is not intelligent — we're having the wrong debate #1
- Doria Hamelryk
- Dec 13, 2025
Updated: Dec 15, 2025
This article is part of a series in which I try to ask the right questions about AI as an architect.
For the past few months, the topic of artificial intelligence has been everywhere.
In our discussions, in projects, and of course on LinkedIn.
I sometimes get the feeling it's presented as intelligent, almost as an autonomous entity able to "understand" the ins and outs of a business.
As an architect, I struggle with this approach.
Not because I think AI is inherently bad, but because of a shift in thinking that I see gaining ground.
The problem isn't AI.
The problem is what we think it's capable of.
The word "intelligence": the initial bias
Talking about artificial intelligence: what a wonderful marketing idea! Honestly, it's very appealing. But it's also very misleading.
A bit like a magic hat that can conjure up any kind of miracle solution.
But here's the thing: AI doesn't understand business context.
It doesn't understand intention.
And it doesn't understand what a good decision is in the human sense of the term.
There have been many cases of AI systems turning out racist, misogynistic, or otherwise biased...
Was the AI really the issue? Not really: the bias was in the data, and the machine faithfully reproduced it.
AI is nothing more than a machine that produces results from statistical correlations in the data it is given.
And it is undeniably effective at detecting patterns, classifying, making statistical predictions, and producing recommendations. But it knows nothing about the implications of those results.
By calling this intelligence, we reinforce a misleading assumption:
we begin to think that a technology has a capacity for judgment, which it absolutely does not.
The endless debate around responsibility
This misuse of language leads to another notable change: responsibility begins to shift.
When a decision is hard to explain or justify (especially when the AI operates as a black box) and the main argument becomes "that's what the AI recommended," it becomes clear that something is amiss with the architecture.
Not technically. Conceptually.
A good architecture rests on simple principles, one of which is that we always know who decides, why, and who bears the consequences of those choices.
Note that architecture here is not limited to the technical side; it also (and especially) covers the business side.
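To make this principle tangible, here is a minimal, hypothetical sketch, loosely inspired by Architecture Decision Records (the DecisionRecord structure and its field names are my illustration, not a standard):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch, loosely inspired by Architecture Decision Records.
# The structure and field names are illustrative, not a standard.
@dataclass(frozen=True)
class DecisionRecord:
    title: str         # what was decided
    owner: str         # the human accountable for the decision
    rationale: str     # why this direction was chosen
    consequences: str  # who bears the impact if the choice turns out wrong
    decided_on: date

record = DecisionRecord(
    title="Prioritize support requests by predicted business value",
    owner="Product Owner",
    rationale="Align support effort with business value",
    consequences="Low-priority customers wait longer; the PO reviews escalations",
    decided_on=date(2025, 12, 1),
)
```

The format matters less than the owner field: a model can inform every other field, but it can never fill that one in.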
Wasn't it the Product Owner who, until now, decided on the directions to take in terms of business value, processes, success factors, etc.?
Doesn't the term Product Owner refer to the notion of Ownership, upon which the responsibility for choices rests?
Yet, we are currently implementing architectures where AI is becoming the decision-maker regarding business directions and decisions.
Of course, the Product Owner still exists and has a role to play, but attributing a form of intelligence to the system means that, in the long run, identifying the decision-maker will become increasingly difficult.
An example to illustrate my point
A company implements an “intelligent” scoring system to prioritize customer requests. The score is based on historical data and statistically validated.
On paper, everything is under control... at least in theory.
Over time, this score becomes THE operational truth. It's no longer taken for what it is (a recommendation) but for what we've come to believe it is: an intelligent decision.
The result: requests with a low score are processed later, or not at all. No one questions the outcome ("if the AI says so, it must be true").
And when a key customer complains, the response is often: “The system didn't identify it as a priority.”
And that’s where the core issue lies. The system didn't make any decisions.
It applied rules and correlations defined by humans.
The score itself isn't the issue. The problem is that it let everyone believe the system "knew" what was important.
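To make the distinction concrete, here is a minimal, hypothetical sketch of what such a scoring system actually does (the names priority_score and decide_priority, the weights, and the threshold are all illustrative assumptions, not a real system):

```python
# Hypothetical sketch: the "intelligent" score is only a weighted sum of
# historical features -- a statistical correlation, not a judgment.
WEIGHTS = {"past_revenue": 0.6, "ticket_volume": 0.3, "recency": 0.1}

def priority_score(features: dict[str, float]) -> float:
    """Map historical features to a number. Nothing here 'knows' anything."""
    return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

def decide_priority(features: dict[str, float], reviewer: str) -> tuple[bool, str]:
    """The score informs the decision; a named human owns it."""
    score = priority_score(features)
    is_priority = score >= 0.5  # the threshold is a human choice, not a model output
    return is_priority, f"decided by {reviewer}, informed by score={score:.2f}"

# Usage: the recommendation and the accountable decision stay distinct.
print(decide_priority({"past_revenue": 0.4, "ticket_volume": 0.8}, reviewer="the PO"))
```

The failure in the story above amounts to deleting the reviewer parameter: the moment the threshold comparison is treated as the decision itself, ownership evaporates.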
AI isn't intelligent; it's consistent. But above all, it doesn't absolve anyone of the need to think.
Conclusion: Advice to Architects
Faced with this growing enthusiasm for AI, you'll sooner or later be seen as a troublemaker, the one who asks to slow down when everyone else wants to rush ahead, the one who asks "why" when everyone else is solving "how."
And yet, you must maintain this stance, because it's the architect's primary role: to ensure that the "how" addresses a clearly defined "why."
Before considering using AI in a system, ask yourself this question:
Which human decisions are we deliberately encoding into an AI model, and which decisions will the AI effectively make without anyone truly owning them?
AI can help make better decisions. But only if someone agrees to remain responsible for the decision. And that is never the machine's role.

