Good Work Can Turn Bad — Even When Nobody Did Anything Wrong

The most common quality failure in fast-moving businesses isn't a dramatic shortcut. It's output drift — a slow erosion where every iteration is reasonable, but the cumulative effect is a standard nobody noticed slipping.

Here’s something I don’t hear talked about enough.

Most conversations about quality at work focus on the moment someone decides to cut a corner. The deliberate shortcut. The skipped approval. The brief that didn’t get written.

But in my experience working with founders and teams, the more common quality failure isn’t dramatic at all. Nobody made a bad call. Nobody skipped anything obvious. And yet, three, six, twelve months later, the output is noticeably worse than it used to be.

This is what I’d call output drift. And it’s one of the most insidious quality problems in fast-moving businesses right now.

How drift actually works

It starts with something small and completely reasonable.

Round one: the brief is solid, the execution is careful, the result is good. Everyone’s happy.

Round two: you’re building on what worked. You use last month’s campaign as the reference point. Saves time. Makes sense.

Round three: you’re refining last month’s version. The original brief is somewhere in a Google Drive folder nobody has opened in weeks.

By round five, nobody is working to the original standard anymore. They’re working to the last output. And the last output was already a step removed from the one before it.

Each individual decision was defensible. The cumulative effect is a slow, invisible decline in quality that feels like normal work right up until someone — usually a client — points it out.

[Diagram: two iteration paths contrasted — one where each round refines the previous output and quality drifts downward, versus one where each round anchors back to the original brief and quality holds steady. Caption: Drift compounds invisibly when each round refines the previous output instead of the brief.]

Why AI makes this problem worse

This is where it gets particularly relevant right now.

If you’re using AI to assist with content, proposals, campaigns, or communications — and most of us are — output drift accelerates. Here’s why.

AI models are trained to be helpful and to build on what you give them. If you feed them a previous output as the starting point, they will improve on that output. They won’t go back to first principles. They won’t ask whether the original brief is still being served. They’ll optimise what’s in front of them.

So the drift that used to happen over six months of human iteration can now happen in six rounds of prompting.

That’s not a flaw in the model. It’s the model doing exactly what it was asked to do — improve on the input. The flaw is in how we’re using it.
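The compounding effect is easy to see with a toy model. This is an illustration only, not a measurement of any real system: assume, purely for the sake of argument, that each round of refinement preserves 98% of the previous round's fidelity to the brief. Chained rounds multiply that loss; re-anchored rounds do not.

```python
# Toy illustration of drift compounding. The 0.98 figure is an
# arbitrary assumption chosen to make the arithmetic visible, not
# a property of any model or workflow.
ROUND_FIDELITY = 0.98  # hypothetical fidelity retained per round


def chained_fidelity(rounds: int) -> float:
    """Each round refines the previous output, so losses compound."""
    return ROUND_FIDELITY ** rounds


def anchored_fidelity(rounds: int) -> float:
    """Each round starts again from the original brief, so the
    per-round loss never compounds."""
    return ROUND_FIDELITY


for n in (1, 3, 6, 12):
    print(f"round {n:2}: chained {chained_fidelity(n):.3f}, "
          f"anchored {anchored_fidelity(n):.3f}")
```

By round twelve the chained path has quietly lost over a fifth of its fidelity, while the anchored path sits where it started — the same defensible step, taken from two different reference points.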

What actually prevents drift

The answer isn’t to stop iterating. Iteration is how things get better. The answer is to anchor to the original standard, not the last output.

In practice, that means:

  • Before each new round, go back to the brief. Not the last version — the original intent.
  • Ask: does this still solve the original problem, or does it just improve the last attempt?
  • Build in periodic “first principles” reviews — not to slow work down, but to reset the reference point before drift accumulates.
  • If you’re using AI in your workflow, explicitly tell it what good looks like from the start. Don’t assume it will infer it from what you’ve been producing.
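For the AI point in particular, the anchoring can live in the prompt assembly itself. Here is a minimal sketch of the idea; the `Brief` fields and `build_prompt` function are hypothetical names, not any particular tool's API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Brief:
    """The original standard — captured once, never overwritten."""
    objective: str
    audience: str
    quality_bar: str


def build_prompt(brief: Brief, latest_draft: str) -> str:
    """Assemble each round's prompt so the brief, not the last
    output, is the reference point. The draft is input to revise,
    not the standard to match."""
    return (
        f"Objective: {brief.objective}\n"
        f"Audience: {brief.audience}\n"
        f"Quality bar: {brief.quality_bar}\n\n"
        "Revise the draft below so it still serves the objective above. "
        "If the draft has drifted from the objective, correct the drift "
        "rather than polishing it.\n\n"
        f"Draft:\n{latest_draft}"
    )
```

Because the brief is baked into every round's prompt, round five sees exactly the same standard round one did — the model is never asked to infer the standard from whatever was produced last.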

Quality doesn’t erode because people stop caring. It erodes because the standard gets replaced — quietly, incrementally, without anyone noticing — by whatever was produced last week.

The system question

If you’re running an AI-assisted operation at any meaningful scale, drift is a system design problem, not a discipline problem.

Telling your team to “remember the brief” doesn’t scale. The brief needs to be embedded in the workflow itself. Every cycle should pull from the original intent, not from the previous cycle’s output. Periodic resets need to be scheduled, not improvised.
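One way to make the scheduled reset structural rather than improvised is to decide, per cycle, what the work is even allowed to reference. A sketch of that rule, with hypothetical names and an arbitrary cadence:

```python
RESET_EVERY = 5  # hypothetical cadence: every fifth cycle starts clean


def reference_inputs(cycle: int, brief: str, last_output: str) -> list[str]:
    """Return the inputs this cycle may look at. The brief always
    comes first; on scheduled reset cycles the previous output is
    deliberately withheld, forcing a return to first principles."""
    if cycle % RESET_EVERY == 0:
        return [brief]              # reset cycle: brief only
    return [brief, last_output]     # normal cycle: brief still leads
```

The point is not the specific numbers — it's that the reset is enforced by the workflow, not remembered by a person.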

That’s a build, not a habit, which is why we treat it as architecture inside Axia rather than something we hope humans will remember to do.

The question worth asking in your next team review: are we still working to the brief, or just improving the most recent version of ourselves?


Gina Cheng is V8 Nexus Founder & President and a marketing strategist at V8 Global. Leadership Insight posts examine the structural shifts that change how commercial work gets done.
