
When the solution is defined before the need #2


This article is part of a series in which I try to ask the right questions about AI as an architect.

On social media, there's a phrase I hear more and more often, sometimes explicitly, sometimes implicitly:


“We want to do AI.”

We don't say "we have this problem," or "we're looking to improve this aspect."

Here, we're putting the solution before the need.


We may have forgotten it, but AI is a technology, not a business process to be implemented, unlike concepts such as "lead qualification," "sales cycle management," or "after-sales customer support."


As an architect, I see this reversal as the major problem with the current wave surrounding AI.


Let me explain.


The "How" before the "Why"


Historically, a well-defined project began by identifying the pain points.


What's wrong with the current process? What are we trying to correct or improve?


  • a process that is too slow?

  • a decision that is too complex?

  • an inconsistency between tools?


Then came technology, as a possible answer to the needs.

And we should really say "technologies" because, so far, I haven't seen a universally applicable solution.


But that was then... Today, we're doing the opposite.

The technology is here: available, powerful, trendy. Then we look for somewhere to apply it. At any cost.


From design to justification


When the solution (technology) arrives before the need (problem), the role of an architect changes radically.


They are no longer asked to design a response to a need and then select the technology that fits.

They are asked to justify a choice that has already been made. And let's be clear: most of the time these are political or financial choices, made long before the architect was consulted.


And since the choice has to be accepted, the narrative is adapted to the desired reality, and only the indicators that support that narrative are selected.

(Note: This is what's called confirmation bias; I'll write an article on the subject.)


Why this approach is dangerous


Deciding on a technology simply because it's available creates problems in discussions, in budgets, and above all in expectations.


  • The right questions will no longer be asked—because they might conflict with the chosen solution.

  • Alternatives won't be explored—because the choice has already been made.

  • Business constraints will be minimized—because, fundamentally, no consideration was given to the business need.


So the project may move forward, and quickly, since the entire technical scoping phase was skipped; but its foundations will be fragile, even completely unstable.


And when problems arise later—low adoption, business misunderstanding, side effects—it's too late to reconsider the initial approach.


Concrete example


Imagine a marketing team that decides to use AI to personalize email campaigns.

On paper, the objective is attractive.

But is it really an objective?


The essential question isn't being asked: What is the real problem today?

Again, we've started with the solution, not the need.


Is it:

  • a click-through rate that's too low, which could be explained by overly generic content?

  • over-solicitation of customers due to a lack of alignment between interactions across different channels?

  • a lack of mastery of the customer lifecycle?


Without this clarification, it's impossible to solve the problem, since the problem itself is unknown.


And yes, we can certainly implement AI even without this.

It will segment, personalize, and optimize.


But who's to say that, ultimately, it won't try to optimize the wrong strategy?


The trap of “they do it, so we must do it too”


One argument often comes up to justify this type of approach:

“We can't ignore it.”

From an architectural perspective, this is a very poor argument.


Technology only makes sense if it’s used to evolve, improve, and progress.


What matters isn't whether we miss out on a technology; it's that we don't miss:


  • a real problem,

  • a concrete opportunity,

  • or a poorly identified risk.


Confusing the two means accepting that technology drives business, not the other way around.



Conclusion — Advice to Architects


Prioritizing the problem over the technology doesn't mean rejecting AI. It means giving it the place it's meant to have.


Your workflow should look something like this:


  • the current situation,

  • what isn't working,

  • what needs to be improved,

  • and only then: how?


Within this framework, AI can be an excellent solution. Or not.

And both are valid decisions.


If you're an architect and the topic of AI comes up in discussions, ask this question:


What problem would still exist if AI didn't exist?

If no one can answer that, then the project isn't ready, and the business goal needs to be defined first.


Technology should never be a starting point. It should be a consequence.

And the architect's role, now more than ever, is to ensure that this order is respected.
