Automating doesn't necessarily mean improving #3
- Doria Hamelryk
- Dec 22, 2025
- 4 min read
This article is part of a series in which I try to ask the right questions about AI as an architect.
In many discussions about AI, one main idea emerges almost each time: if a process is slow, expensive, or tricky, it should be automated.
As an architect, I've learned to be cautious of the equation:
automation = improvement.
It's simple, but it can hide a much more complex reality.
What a process really tells us
A business process never exists by chance. Even when it is not optimal, it is the result of:
historical constraints,
organizational trade-offs,
and sometimes implicit decisions.
It's easy to criticize the situation at a given moment and claim that things should never have been implemented this way. In reality, many factors explain the current implementation.
That's why automating a process without understanding it doesn't simplify it: it freezes its assumptions.
And unfortunately, those assumptions are rarely documented.
The trap of apparent efficiency
AI is very effective at automating many tasks like:
repetitive decisions
prioritizations
classifications
recommendations
At first glance, the benefits appear immediate:
less human intervention
reduced processing time
less risk of error
But what we gain in operational efficiency, we may lose in our capacity to evolve and adapt.
With "classic" automation, we generally have control over all the rules and logic we define.
With AI-based automation, the situation is different: the system establishes correlations that are not always explicitly formulated, nor entirely understandable.
This isn't necessarily a weakness, but it is a major change from an architectural point of view.
And an automated process that "works" will very quickly be treated as the truth. Even when the first side effects appear (unforeseen exceptions, missing logic, etc.), there are few situations where anyone takes the time to review the process.
A concrete example
Let's take a very common example.
A support team (client service) implements AI to automatically classify incoming tickets and determine their routing.
The goal is clear: relieve the teams by giving better structure to their daily workload.
At first, everything goes smoothly. Simple tickets are processed quickly. Response times improve.
But over time, it becomes apparent that complex or borderline cases are poorly classified. They don't fit into the predefined categories.
It's not so much that the rules are incomplete, but rather that the model has learned statistical correlations that work very well in most cases (even if we can't always explain why).
So, as long as reality remains close to the training data, the system seems relevant. As soon as we deviate from it, the limitations appear—often where the stakes are highest.
Before automation, a human would detect these "odd" cases, take the time to reclassify them, and could escalate the issue to the technical teams so they could adapt the assignment model.
After automation, these cases are overlooked—we're faster, but we become blind to these exceptions.
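One way to avoid going blind to exceptions is to keep low-confidence classifications visible instead of forcing every ticket into a category. The sketch below is purely illustrative, not the team's actual system: the classifier, category names, and threshold are all assumptions, with a toy stand-in for a trained model.

```python
# Hypothetical sketch: AI-assisted ticket routing that escalates
# low-confidence ("odd") cases to a human instead of silently
# auto-classifying them. All names and thresholds are illustrative.

CONFIDENCE_THRESHOLD = 0.80  # below this, a human reviews the ticket

def classify(ticket_text: str) -> tuple[str, float]:
    """Stand-in for a trained model: returns (category, confidence)."""
    text = ticket_text.lower()
    if "refund" in text:
        return ("billing", 0.95)
    if "error" in text:
        return ("technical", 0.85)
    return ("general", 0.40)  # unfamiliar ticket -> low confidence

def route(ticket_text: str) -> str:
    category, confidence = classify(ticket_text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{category}"
    # The exception stays visible: queued for human review, where it
    # can later feed back into retraining or new categories.
    return "human-review"

print(route("I want a refund for my order"))   # auto:billing
print(route("Strange request about my data"))  # human-review
```

The design choice here is the point: the threshold is an explicit, reviewable decision about how much deviation from the training data the team tolerates before a human looks at the case.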
What automation really eliminates
It would be wrong to say that automating eliminates human judgment in the strict sense.
Indeed, to implement an automated process:
basic rules are established
if necessary, certain compromises can be hardcoded
and other exceptions can be deliberately ignored or accepted
This process is defined and validated by humans, at least in the case of traditional automation.
The difference with AI, therefore, is not the absence of human rules, but the fact that these rules take the form of statistical correlations that can be impossible to unpack in a linear fashion.
We no longer always know the precise logic that led to a particular decision—only that it is consistent with the model.
What this eliminates is the capacity to exercise judgment when reality diverges from the model.
Only a human can exercise judgment when they feel that "something is wrong"—simultaneously deciding not to follow the rule and adapting to a situation that deviates from the norm.
So what we are eliminating is:
the capacity for questioning,
the ability to identify systemic shortcomings.
Yet, it is precisely these areas that allow a system to remain alive and evolve over time.
Therefore, the architect must not only ask whether a process can be automated, but also what level of complexity that process was able to absorb.
Automation is a design choice
From an architectural perspective, automation is never a neutral act. It's a fundamental design decision.
It impacts:
responsibility (who fixes the bugs?),
governance (who can challenge the process?),
scalability (how do we adapt when the context changes?).
The specific role of the architect
When faced with an AI-powered automation project, the architect's role is to ask uncomfortable questions.
For example:
What exceptions did this process implicitly handle?
What decisions were made on a case-by-case basis?
What level of flexibility are we willing to sacrifice?
These questions may slow down the project, but as is often the case, they primarily prevent the creation of a rigid system that is difficult to fix and disconnected from real-world conditions.
Improving doesn't always mean automating.
Improving a system can sometimes involve:
clarifying the need,
simplifying the process,
better training for teams,
or redefining responsibilities.
AI can be an excellent solution. But only after understanding what we are truly trying to improve.
As we explained in the first article of this series, technology (automation) is only a means to an end, and should never become the end itself.
Conclusion — Advice to the Architects
Before automating a process with AI, ask yourself this question:
What exceptions did this process make visible, and which of them will automation inevitably mask?
If you don't have a clear answer, then automation risks masking exceptions rather than addressing them—and freezing the system in a state where it can no longer evolve.
A good architect doesn't try to make systems faster at all costs. They strive to preserve their ability to be controlled, challenged, and adjusted in the face of unforeseen situations.
And sometimes, the best improvement is simply allowing time for these exceptions to emerge and be understood.