When Not to Use AI: A Framework for Saying No
Every tool has limits. Here's how to recognize when AI isn't the answer, before you waste time and money finding out.
We make our living helping companies implement AI. So it might seem strange that we spend significant time talking clients out of AI projects.
But it's essential. The fastest way to destroy trust in AI (and waste budget) is to use it for the wrong things.
The Bad Fits
Some patterns we've learned to recognize:
When Accuracy Is Non-Negotiable
If a 2% error rate is unacceptable, current AI probably isn't the answer. Medical diagnoses. Legal advice. Financial regulations.
AI can assist in these domains, but it can't be the final authority. The liability alone should give you pause.
When the Stakes Are Asymmetric
What happens when the AI is wrong? If the cost of a false positive vastly exceeds the benefit of a true positive (or vice versa), traditional systems with explicit rules often perform better.
AI optimizes for average performance. Sometimes you need guaranteed performance.
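To make the asymmetry concrete, here is a minimal sketch with hypothetical numbers: a model with better average accuracy can still be far more expensive than a cruder rule-based system once errors are priced unevenly.

```python
# Hypothetical cost matrix: a false positive (say, wrongly denying a claim)
# costs 100x more than a false negative (approving one that should be reviewed).
COST_FALSE_POSITIVE = 1000.0
COST_FALSE_NEGATIVE = 10.0

def expected_cost(false_positive_rate, false_negative_rate):
    """Expected cost per decision under the asymmetric cost matrix."""
    return (false_positive_rate * COST_FALSE_POSITIVE
            + false_negative_rate * COST_FALSE_NEGATIVE)

# Model A: fewer total errors, but its errors skew toward false positives.
model_a = expected_cost(false_positive_rate=0.02, false_negative_rate=0.01)
# System B: a conservative rule-based system with more total errors,
# but almost never the expensive kind.
system_b = expected_cost(false_positive_rate=0.001, false_negative_rate=0.05)

print(f"Model A expected cost per decision:  {model_a:.2f}")   # 20.10
print(f"System B expected cost per decision: {system_b:.2f}")  # 1.50
```

Model A makes 3% errors to System B's 5.1%, yet costs more than thirteen times as much per decision. Average accuracy is the wrong metric when the cost matrix is lopsided.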
When Trust Is the Product
Will your customers accept AI making decisions that affect them? Sometimes yes. Sometimes absolutely not.
An AI-generated product recommendation is fine. An AI-generated denial of an insurance claim creates PR nightmares. Know your users.
When Explainability Is Required
Regulatory environments often require explanation. "The model predicted this outcome based on pattern matching across millions of data points" doesn't satisfy auditors.
If you need to explain exactly why a decision was made, rule-based systems still have their place.
When Data Doesn't Exist
AI needs data. Lots of it. Ideally labeled and clean.
If you don't have historical data for your use case, AI can't learn from it. Starting from scratch means starting with humans, and building data collection into your process.
The Decision Framework
Before committing to AI, answer these questions:
1. What's the failure mode?
How will the AI fail? How often? What's the impact? Can you detect failures before they reach users?
2. What's the human alternative?
Would humans do this task better? Faster? More cheaply? Sometimes the answer is yes.
3. What's the integration cost?
AI doesn't exist in isolation. What changes to your existing systems are required? What's the total cost of ownership?
4. What's the maintenance burden?
AI systems degrade. Models drift. Data changes. Do you have the capability to maintain this long-term?
5. What's the opportunity cost?
What else could you build with the same resources? Sometimes the unglamorous solution is the right solution.
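To keep the five questions from being skipped under deadline pressure, they can be encoded as an explicit go/no-go gate. The questions below come from this framework; the field names and pass/fail logic are our own illustrative sketch, not a standard tool.

```python
from dataclasses import dataclass, fields

@dataclass
class AIFitAssessment:
    # One yes/no gate per framework question (names are our own shorthand).
    failure_mode_understood: bool      # 1. Failure mode known and detectable?
    beats_human_alternative: bool      # 2. Better than the human alternative?
    integration_cost_acceptable: bool  # 3. Total cost of ownership acceptable?
    maintainable_long_term: bool       # 4. Can we handle drift and retraining?
    best_use_of_resources: bool        # 5. Survives the opportunity-cost test?

    def verdict(self):
        """Return ('go', []) only if every gate passes; otherwise list blockers."""
        blockers = [f.name for f in fields(self) if not getattr(self, f.name)]
        return ("go" if not blockers else "no-go", blockers)

# Example: strong on most axes, but nobody owns long-term maintenance.
assessment = AIFitAssessment(True, True, True, False, True)
print(assessment.verdict())  # ('no-go', ['maintainable_long_term'])
```

A single failed gate produces a no-go by design: the framework's point is that any one of these questions, answered badly, can sink the project on its own.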
The Hybrid Path
Often the answer isn't "AI" or "no AI" but "AI with guardrails": AI drafts, humans approve. AI handles routine cases, humans handle the edge cases. AI proposes, explicit rules constrain what it's allowed to act on.
These hybrid approaches capture most of the value while managing most of the risk.
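As a simplified illustration of "AI with guardrails", here is a routing sketch: the model acts on its own only when its confidence clears a threshold and no explicit rule vetoes it; everything else escalates to a person. The threshold, the dollar limit, and the decision labels are all hypothetical.

```python
CONFIDENCE_THRESHOLD = 0.90  # hypothetical; tune against your own failure costs

def route_decision(model_confidence, model_decision, claim_amount):
    """Hybrid guardrail: AI handles routine cases, humans handle the rest.

    Returns ('auto', decision) when the AI may act, or ('human_review', None).
    """
    # Guardrail 1: hard rule. High-stakes cases always go to a person.
    if claim_amount > 10_000:
        return ("human_review", None)
    # Guardrail 2: low-confidence predictions are never auto-applied.
    if model_confidence < CONFIDENCE_THRESHOLD:
        return ("human_review", None)
    # Guardrail 3: denials carry the asymmetric downside, so they
    # require human sign-off regardless of confidence.
    if model_decision == "deny":
        return ("human_review", None)
    return ("auto", model_decision)

print(route_decision(0.97, "approve", 500))    # ('auto', 'approve')
print(route_decision(0.97, "deny", 500))       # ('human_review', None)
print(route_decision(0.80, "approve", 500))    # ('human_review', None)
```

Note the design choice: the guardrails are explicit rules, not learned behavior, so they are exactly the auditable, explainable layer that the AI itself can't provide.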
When to Revisit
Technology evolves. What doesn't work today might work in six months. Keep a "not yet" list of AI opportunities that weren't ready, and revisit it periodically.
But also be honest about projects that failed. Why? Can those conditions change? Or is this fundamentally a bad fit?
The Real Skill
Knowing when to use AI is valuable. Knowing when not to use AI is equally valuable, and rarer.
The companies that succeed with AI long-term are the ones that make both calls well.