Fix This Before You Automate Anything
By Kham Inthirath
December 1, 2025
Automation is not a cleanup tool. But when work feels messy, automation is tempting.
It promises order. Speed. Relief.
But automation doesn’t clean things up.
It hardens whatever already exists.
If a process is unclear, automation won’t clarify it.
It will simply move the confusion faster.
This is the mistake we see most often: teams reach for AI because a workflow feels inefficient, inconsistent, or painful. They hope the tool will impose structure after the fact.
It won’t.
AI doesn’t decide what matters.
It doesn’t resolve ambiguity.
And it doesn’t fix ownership gaps.
It executes.
That’s why automating too early often makes problems louder, not smaller. Errors propagate. Edge cases multiply. And people start working around the system instead of with it.
Automation works best after the thinking is done.
Once the process is clear.
Once the handoffs are understood.
Once it’s obvious where judgment belongs.
Until then, automation isn’t leverage.
It’s acceleration without direction.
The Pattern We See Before Automation Fails
Before automation breaks, it usually sounds reasonable.
You’ll hear things like:
“Everyone does this a little differently.”
“We’ll standardize later.”
“Let’s just automate the obvious parts.”
“The tool will help enforce the process.”
None of these are red flags on their own. But together, they’re a pattern. They signal that the process hasn’t actually been decided yet.
When teams automate at this stage, they’re not encoding clarity. They’re encoding assumptions.
Each person fills in the gaps differently. Edge cases get ignored. Exceptions pile up. And when something goes wrong, no one is quite sure whether the issue is the tool, the process, or the people.
So teams compensate:
- adding manual checks
- layering on approvals
- creating side documents to explain “how it’s really supposed to work”
The automation technically runs.
But the work around it grows.
That’s the quiet failure mode most leaders don’t see until later. AI didn’t create the confusion. It removed the buffer that used to hide it. And once automation is in place, fixing the underlying process becomes harder, not easier, because now the ambiguity is baked into the system.
That’s why the smartest teams don’t ask,
“Can we automate this?”
They ask, “Do we agree on how this should work?”
If the answer isn’t clear, automation can wait.
The Four Things to Fix First
Before you automate anything, there are a few fundamentals that need to be settled.
Not optimized.
Not documented perfectly.
Just decided.
This is the work automation can’t do for you.
1. Ownership
Someone has to own the outcome.
Not the task. Not the tool. The result.
Who decides if this worked? Who is accountable when it doesn’t?
If ownership is shared vaguely or rotates informally, automation creates confusion faster than clarity. Errors turn into debates. Fixes turn into meetings.
Automation needs a single point of responsibility to anchor to.
2. Inputs That Matter
Most workflows contain more information than they actually need.
Before automating, you need to know:
- what inputs are required
- what’s optional
- and what should be ignored
AI doesn’t sort this out on its own. It treats all inputs as equally important unless told otherwise.
Automating without clarifying inputs guarantees inconsistent outputs, and a lot of “why did it do that?” conversations later.
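One way to make this concrete is to declare the inputs before wiring anything up. The sketch below is purely illustrative; the field names (`deal_stage`, `call_notes`, and so on) are hypothetical, not a real schema, and the point is simply that required, optional, and ignored inputs are decided explicitly rather than left for the tool to guess.

```python
# Illustrative sketch only: field names are hypothetical.
REQUIRED = {"deal_stage", "call_notes"}
OPTIONAL = {"next_meeting_date"}
IGNORED = {"internal_tags", "legacy_score"}  # present in the data, deliberately unused

def prepare_inputs(record: dict) -> dict:
    """Validate and trim a raw record down to the inputs the automation may use."""
    missing = REQUIRED - record.keys()
    if missing:
        # Fail loudly instead of letting the automation guess.
        raise ValueError(f"Missing required inputs: {sorted(missing)}")
    allowed = REQUIRED | OPTIONAL
    return {k: v for k, v in record.items() if k in allowed}

raw = {"deal_stage": "proposal", "call_notes": "asked for pricing",
       "legacy_score": 42, "internal_tags": ["q3"]}
print(prepare_inputs(raw))  # only deal_stage and call_notes survive
```

The useful part isn’t the code; it’s that someone had to write the three sets down, which forces the “what matters?” conversation before automation starts.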
3. Decision Points
Every workflow contains moments where judgment matters.
Where a human should:
- pause
- review
- or override the default
If you haven’t identified those moments, automation will blow past them.
That’s not a tooling issue. That’s a design omission.
Deciding where automation must stop is just as important as deciding where it should run.
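Decision points can be encoded just as explicitly. This is a minimal sketch under assumed rules (a discount threshold, a sentiment flag); both conditions are invented for illustration. What it shows is the design choice: the stopping conditions are written down, so automation routes work to a person instead of blowing past the moment where judgment belongs.

```python
# Illustrative sketch: thresholds and field names are hypothetical.
def needs_human_review(draft: dict) -> bool:
    """Return True when a defined decision point says a person must look first."""
    if draft.get("discount_pct", 0) > 10:     # pricing exceptions need approval
        return True
    if draft.get("sentiment") == "negative":  # upset customer: no auto-send
        return True
    return False

def route(draft: dict) -> str:
    return "queue_for_review" if needs_human_review(draft) else "auto_send"

print(route({"discount_pct": 15}))                         # queue_for_review
print(route({"discount_pct": 5, "sentiment": "neutral"}))  # auto_send
```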
4. Define What Success Looks Like
Finally, you need a clear definition of success.
Not “this feels faster.”
Not “people seem to like it.”
What actually changes if this works?
- time saved
- errors reduced
- throughput improved
Without this, automation feels busy instead of valuable, and leadership has no basis for deciding what to expand or shut down.
What This Looks Like in Practice
The difference between automating too early and fixing first isn’t theoretical.
It shows up in very ordinary, everyday workflows.
These are the moments where automation either compounds value or creates drag.
Example 1: Sales Follow-Ups
Automated too early
Sales reps are told to “use AI to write better follow-ups” after calls.
That usually means:
- opening a separate tool
- pasting in notes
- tweaking the output
- then sending the email
It works when reps remember, but risks falling apart when things get busy.
Fixed first
The team agrees on what a good follow-up should include.
Then follow-ups are drafted automatically based on call outcomes or deal stage and appear directly in the CRM.
Reps review, adjust if needed, and move on.
Result:
More consistent follow-ups, fewer dropped balls, and no extra steps added to the day.
Example 2: Meeting Notes and Handoffs
Automated too early
Someone copies raw notes into an AI tool to summarize them, then pastes the summary into a doc or system someone else is supposed to read.
Context slips. Versions multiply.
It’s never quite clear which summary is the official one.
Fixed first
The team agrees on what needs to be captured and where it should live.
Summaries are generated automatically and saved to the record where the next person expects to find them.
Result:
Cleaner handoffs and far fewer “can you recap this?” messages.
Example 3: Weekly Reporting
Automated too early
An analyst exports data, cleans it, uploads it to an AI tool, and writes a summary every week.
It technically works, but only because one person holds it together.
Fixed first
Leadership agrees on what questions the report is meant to answer.
A consistent summary is generated on a schedule using live data from source systems.
Result:
Predictable insight without heroics or single points of failure.
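The “fixed first” version of the reporting example can be sketched in a few lines. Everything here is hypothetical (the questions, the metric keys), but it illustrates the principle: the agreed questions define the report’s structure, the same sections render every week, and missing data is flagged rather than quietly papered over by whoever happens to assemble the report.

```python
# Illustrative sketch: questions and metric keys are hypothetical.
AGREED_QUESTIONS = {
    "new_deals": "How many deals entered the pipeline this week?",
    "cycle_days": "What is the average days-to-close?",
}

def weekly_report(metrics: dict) -> str:
    """Render the same sections every week; missing data is flagged, not hidden."""
    lines = []
    for key, question in AGREED_QUESTIONS.items():
        value = metrics.get(key, "DATA MISSING")
        lines.append(f"{question} -> {value}")
    return "\n".join(lines)

print(weekly_report({"new_deals": 12}))
```

In practice the `metrics` dict would come from source systems on a schedule; the point is that the report’s shape no longer depends on one analyst holding it together.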
Example 4: Content Drafts
Automated too early
Writers are told to “run drafts through AI first.”
Quality varies. Voice drifts. Review cycles grow.
Fixed first
Expectations are clarified up front: tone, structure, and what “good” looks like.
Drafts are generated inside existing tools with clear review points already defined.
Result:
Faster drafts, fewer rewrites, and less friction in approvals.
Across all of these examples, the pattern is the same.
The AI capability doesn’t change. The design does.
When teams fix the process first, automation removes steps.
When they don’t, automation just moves the mess faster.
Why Automating Too Early Gets Expensive Fast
The cost of automating before fixing the process isn’t always obvious at first.
The system runs.
The dashboards light up.
Something is technically happening.
But underneath, the drag starts to accumulate.
Every unclear step creates exceptions.
Every exception creates manual work.
Every manual workaround becomes tribal knowledge.
Instead of saving time, automation shifts the burden:
- from execution to supervision
- from doing the work to explaining the work
- from clarity to coordination
Teams spend more time managing the automation than benefiting from it.
There’s also a hidden cost leaders feel before they can name it: trust erosion.
When outputs are inconsistent, people stop relying on them.
When results vary, leaders stop using them in decisions.
And when AI can’t be trusted, it gets sidelined, regardless of how much was invested.
At that point, automation hasn’t just failed to deliver ROI.
It’s created skepticism that makes future efforts harder to justify.
This is why “we’ll fix it later” is so risky.
Once automation is live, every correction feels like rework.
Every clarification feels political.
And every pause feels like backtracking.
Fixing the process first may feel slower in the moment.
Fixing automation fallout is always slower in the end.
Before You Automate Anything: Questions to Ask
Before approving any automation, it helps to pause — not to slow your team down, but to prevent avoidable waste.
There’s a simple check that catches most problems early.
Ask three questions.
1. Do we actually agree on how this should work?
If different people would describe the process differently, automation will only lock in those differences.
Agreement doesn’t require perfection.
It requires a shared baseline.
2. Do we know where judgment still matters?
If every edge case is treated as “we’ll see what happens,” automation will make decisions no one intended to delegate.
Knowing where automation must stop matters just as much as knowing where it should run.
3. Do we know what success looks like if this works?
If you can’t say what will change (time saved, errors reduced, handoffs improved, etc.), automation becomes activity without direction.
If any of these answers are unclear, that’s not a failure. It’s a signal.
It means the process needs a bit of thinking before it needs a tool.
Where to Start (Without Overcommitting)
If you’re not sure what to fix before automating, that’s normal.
Most teams don’t need a full AI rollout.
They need a clear view of
one workflow,
one bottleneck, and
what needs to change first.
That’s exactly what the 90-Minute AI Snapshot is designed for.
It’s a focused working session to:
- map a single real workflow end to end
- identify what needs to be clarified before automation
- decide where AI helps and where it would just bring confusion and drag
No new tools or obligation to move further. Just a clear, defensible starting point you can act on or confidently walk away from.

