AI Should Remove Steps, Not Create New Ones
By Kham Inthirath
December 1, 2025
AI is supposed to make work easier. But for a lot of teams, it’s doing the opposite.
Instead of fewer steps, there are more:
- more tabs
- more copy-pasting
- more reminders to “use the AI”
When that happens, the issue usually isn’t the technology but how it’s been introduced. AI works best when it disappears into the workflow, showing up automatically, exactly where the work already happens.
When it doesn’t, people have to remember to open a tool, paste something in, or take an extra step. That creates friction instead of leverage.
That’s the difference between layering AI on top of work and building it into how work already moves. And it’s why some teams quietly stop using AI after the initial excitement wears off.
The Common Mistake: Treating AI as a Layer
Most teams don’t set out to make AI more complicated. They’re trying to move fast.
A tool gets approved. Someone runs a pilot. Early results look promising. And instead of redesigning the workflow, AI gets added on top of whatever already exists. That’s when the trouble starts.
Layered-on AI usually shows up as:
- “Run this through ChatGPT before you send it.”
- “Paste your notes into the AI tool after the call.”
- “Don’t forget to summarize this with AI.”
Each request sounds small, even reasonable. But together, they create a pattern:
AI becomes one more step people have to remember.
And if a system depends on memory, discipline, or goodwill to function, adoption will always be fragile.
The issue isn’t that people resist AI. It’s that they resist extra work.
When AI is layered on, it asks users to:
- stop what they’re doing
- switch contexts
- decide whether to use the tool
- then return to the original task
That friction compounds quickly, especially under pressure. That’s why many AI initiatives don’t fail outright; they slowly fade, because the workflow never changes.
What “Built Into the Workflow” Means
When people hear “built into the workflow,” it often sounds abstract. In practice, it’s very simple.
Built-in AI doesn’t ask people to do anything new. It removes decisions instead of adding them.
The user doesn’t have to remember:
- when to use the tool
- which prompt to run
- where to paste the output
The work moves forward, and AI shows up only in the result.
That’s the real distinction.
Built-in AI is invisible until the moment it creates value.
It lives inside the systems people already use. It runs automatically, based on what’s happening in the workflow, not on someone remembering to trigger it.
A good test is this:
If someone can forget AI exists and still benefit from it, it’s built in.
When AI is designed this way:
- adoption takes care of itself
- consistency improves
- trust increases
Not because people were trained harder, but because the system was designed better.
This is also why built-in AI scales, while layered-on AI doesn’t.
When usage depends on habit or discipline, it breaks under pressure.
When it’s embedded in the workflow, it holds up when things get busy.
Here’s what that difference looks like in practice.
Layered On vs. Built In
Same AI capability. Very different experience.
| Workflow Moment | Layered-On AI | Built-In AI |
|---|---|---|
| After a sales call | Rep pastes notes into ChatGPT | Call notes auto-summarized and saved to CRM |
| Follow-ups | Reminder to “use AI to draft email” | Follow-up drafted automatically from call outcome |
| Forecasting | Export → clean → upload → summarize | Weekly summary generated from live CRM data |
| Content drafts | Copy/paste prompts into an AI tool | Draft created inside existing doc or CMS |
| Approvals | AI output reviewed ad hoc | Review checkpoints built into workflow |
Examples
#1. Sales Follow-Ups
Layered on
Reps are told to “use AI to write better follow-ups” after calls.
That means:
- remembering to open a separate tool
- pasting notes in
- tweaking the output
- then sending the email
It’s optional, inconsistent, and easy to skip when things get busy.
Built in
Follow-ups are drafted automatically based on call outcomes or deal stage and appear where reps already work.
The rep reviews, adjusts if needed, and moves on.
Result:
More consistent follow-ups, fewer dropped balls, and no extra steps for the team.
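A minimal sketch of what "built in" means here mechanically: the draft is created by an event handler when the call is logged, not by the rep remembering to open a tool. All names (`CallRecord`, `on_call_logged`, `generate_text`) are hypothetical, and the AI call is a stand-in for whatever model the team already uses.

```python
# Hypothetical sketch: built-in AI runs on a workflow event,
# so the rep never takes an extra step.
from dataclasses import dataclass


@dataclass
class CallRecord:
    rep: str
    outcome: str              # e.g. "demo_scheduled", "no_decision"
    notes: str
    follow_up_draft: str = ""  # filled in automatically by the system


def generate_text(prompt: str) -> str:
    # Stand-in for the team's actual AI model call.
    return f"[AI draft based on: {prompt}]"


def on_call_logged(record: CallRecord) -> CallRecord:
    # Fires automatically when a call is saved -- no reminder, no tab switch.
    prompt = (
        f"Write a follow-up email for a call with outcome "
        f"'{record.outcome}'. Notes: {record.notes}"
    )
    record.follow_up_draft = generate_text(prompt)
    return record


call = on_call_logged(
    CallRecord(rep="Sam", outcome="demo_scheduled",
               notes="Asked about pricing tiers.")
)
print(call.follow_up_draft)  # the draft is already waiting where the rep works
```

The design choice is the point: the rep's only action is reviewing a draft that already exists, which is the "remove a step" test in code form.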
#2. Meeting Notes and Handoffs
Layered on
Someone copies raw notes into an AI tool to summarize them, then pastes the summary into the CRM or project doc.
Context gets lost. Timing slips. Ownership is fuzzy.
Built in
Call summaries are generated automatically and saved directly to the record where the next person expects to find them.
No copying. No guessing which version is current.
Result:
Cleaner handoffs and fewer “can you recap this?” messages.
#3. Weekly Reporting
Layered on
An analyst exports data, cleans it, uploads it to an AI tool, then writes a summary every week.
It works until that person is out, busy, or rushed.
Built in
A weekly summary is generated on a schedule using live data from source systems.
The same structure. The same timing. Every week.
Result:
Leadership gets predictable insight without heroics.
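The reporting example can be sketched the same way: a scheduled job reads live data and emits the same structure every week, with no analyst in the loop. The function and field names below are illustrative; in practice this would pull from the source systems on a schedule.

```python
# Hypothetical sketch: a scheduled weekly summary with a fixed structure,
# generated from live data instead of a manual export-clean-upload cycle.
from statistics import mean


def weekly_summary(deals: list[dict]) -> str:
    won = [d for d in deals if d["stage"] == "won"]
    open_ = [d for d in deals if d["stage"] == "open"]
    lines = [
        "Weekly Pipeline Summary",
        f"- Deals won: {len(won)} (${sum(d['value'] for d in won):,})",
        (f"- Open pipeline: {len(open_)} deals, "
         f"avg ${mean(d['value'] for d in open_):,.0f}")
        if open_ else "- Open pipeline: none",
    ]
    return "\n".join(lines)


# In production this data would come from the CRM; here it is stubbed.
deals = [
    {"stage": "won", "value": 12000},
    {"stage": "open", "value": 8000},
    {"stage": "open", "value": 4000},
]
print(weekly_summary(deals))
```

Because the structure lives in code rather than in one analyst's habits, the report looks the same whether or not that person is available that week.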
#4. Content Drafts
Layered on
Writers are told to “run this through AI first,” then paste the output into the CMS.
Quality varies. Voice drifts. Review cycles increase.
Built in
Drafts are generated inside existing tools with clear context and review checkpoints already defined.
AI accelerates the work without changing how it moves.
Result:
Faster drafts, fewer rewrites, and less friction in approvals.
Across all of these examples, the pattern is the same.
The AI capability doesn’t change. The design does.
When AI removes steps, teams feel relief. When it adds steps, they quietly work around it.
The Hidden Cost of Added Steps
Every extra step in a workflow carries a cost.
Not just in time, but in trust.
When AI adds steps, teams don’t usually complain.
They adapt quietly. They skip the tool when they’re busy. They use it inconsistently. They revert to old habits under pressure.
From the outside, it looks like resistance. In reality, it’s friction doing what friction always does.
The more steps a system requires:
- the more discipline it demands
- the more exceptions it creates
- the harder it is to sustain
AI layered on top of work becomes optional. Optional tools are the first to be abandoned.
There’s also a second, less visible cost.
When AI outputs vary because usage is inconsistent, trust erodes, both in the tool and in the process around it.
Leaders start asking:
- “Can we rely on this?”
- “Why does this look different every time?”
- “Is this really saving time?”
At that point, the conversation shifts away from value and toward risk.
And once AI is framed as risky instead of helpful, progress slows dramatically. That’s why the biggest danger of added steps isn’t inefficiency. It’s that AI loses credibility before it ever earns trust.
A Simple Test Leaders Can Use
Before approving any AI use case, there’s a simple test that catches most problems early. This test isn’t about slowing teams down. It’s about protecting them from tools that look helpful but quietly create drag.
Ask three questions:
1. Does this remove a step or add one?
If AI requires an extra action from the user, pause.
That’s usually friction disguised as innovation.
2. Does someone have to remember to use it?
If success depends on memory, habit, or reminders, adoption will be fragile, especially when things get busy.
3. Does the output appear where the work already happens?
If people have to go looking for AI output, it’s been layered on.
If you can’t answer these clearly, that’s a signal, not a failure. It means the workflow hasn’t been designed yet. Pausing to ask these questions prevents more waste than any policy or training ever will.
Why This Is a Leadership Issue, Not a Tool Issue
At some point, AI adoption stops being a technology question and becomes a leadership one.
Tools don’t decide how work flows. People do.
If AI is layered on, it’s rarely because the tool was chosen poorly. It’s because no one was responsible for redesigning the workflow end to end.
Teams can experiment.
They can test ideas.
They can surface opportunities.
But they can’t remove steps, redefine ownership, or change how work moves across functions without leadership involvement. That’s not a failure on their part. It’s a boundary.
AI magnifies whatever structure already exists.
If workflows are unclear, AI makes that obvious. If ownership is fragmented, AI amplifies the fragmentation, which is why AI often feels chaotic in organizations that are otherwise well run.
Not because leaders are doing something wrong, but because AI exposes decisions that were previously implicit. And implicit decisions don’t scale.
When leaders take ownership of workflow design:
- AI removes friction instead of adding it
- adoption becomes natural instead of forced
- trust grows because results are consistent
That’s when AI stops feeling like an experiment and starts behaving like infrastructure.
If AI Adds Steps, Pause
Most teams don’t need more tools.
They need fewer handoffs, clearer ownership, and systems that work the way people actually work.
That’s where AI delivers its real value: as part of the flow, not as an extra layer.
If AI feels heavier instead of lighter, something upstream needs attention.
A focused, 90-minute AI Snapshot is designed for exactly this moment:
- map a single workflow end to end
- identify where steps can be removed
- decide where AI belongs and where it doesn’t

