
The efficiency trap: AI gave us time back... So where did it go?

For the past couple of years, one promise has been repeated in nearly every conversation about AI in business: it will free us. Free us from the repetitive, the mundane, the time-consuming. And with that freed-up time, we would finally do what we’ve always said we don’t have enough time for: think deeply, develop new ideas, innovate and inspire; simply build something better. What if the time AI saves us never goes where we intended?


It’s a compelling narrative. It shows up in everyday discussions, in transformation roadmaps, in board presentations, and in the way consultants, myself included, frame the case for change. AI handles the routine so humans can focus on what matters.

The logic is clean. Almost too clean. Because buried inside it is an assumption that no one seems to question when it comes to AI: that if we just had more time, we would naturally use it well. Write it down and it becomes almost laughable. When has more time, on its own, ever led us to do better?

Here is what I keep seeing instead

An organisation adopts AI tools. Processes get faster. Reports that took a day now take an hour. Content that required a week of back-and-forth gets produced in an afternoon. Real, measurable gains.

And then something predictable happens. The freed-up hours don’t become space for reflection or development. They become capacity. Capacity for the next deliverable, the next cycle, the next deadline that just got tighter because, well, now we can.

Efficiency creates more efficiency. The loop closes before anyone notices it was open. The time that was supposed to go to thinking goes to more doing. Just faster.

This isn’t anyone’s deliberate choice, I assume. It’s the natural pull of organisations under pressure. When you suddenly have more capacity, the most intuitive response is to fill it. Not to protect it.

And then there's the context we're actually operating in

The economy is tight. Budgets are being cut. Headcount is under scrutiny. In this environment, the efficiency narrative doesn’t just dominate. It becomes the only narrative.

When resources are scarce, development is among the first things to go. Innovation resources shrink. The space for experimentation closes. The time AI frees up doesn’t flow towards building something new. It flows towards survival: doing more with less, holding the line, keeping output steady with fewer people.

There’s a painful paradox in this. Precisely when organisations need to think more carefully about their future, the economic pressure makes it harder to justify thinking at all. Thinking doesn’t show up in next quarter’s numbers. Efficiency does.

But here's a question I find is almost never asked:

Whose benefit is AI’s saved time actually calculated for?

There are at least three different logics at play, and they point in very different directions.

1. There's the owner's logic – a financial perspective:

The saved time is a cost reduction. More output per person, faster delivery, better margins. The efficiency gain goes straight to the bottom line.

2. There's the organisation's logic – a development perspective:

The saved time is an investment. It goes into development, strategic thinking, building capabilities that didn't exist before. The gain is future-facing.

3. And there's the customer's logic – an experience perspective:

The saved time translates into something better on the other end. A more thoughtful solution. A product developed with more care. The gain is felt by the people the organisation exists to serve.

Each of these logics, in its own way, enables sustainable business. The question is not which one is right; it’s which one we choose to prioritise, and whether we make that choice deliberately.

Most organisations I encounter default to the first logic. Not because they’ve chosen it, but because in the current economic environment it’s the path of least resistance. The efficiency metric is easy to measure, easy to report, easy to defend. The other two require a deliberate decision, and harder work to justify it. It means asking uncomfortable questions: why does this choice demand action now? What do we stand to gain, both immediately and in the long run? And how do we measure the success of a choice that doesn’t show up in this quarter’s numbers?

These are not easy questions. Which may be exactly why they so often go unasked.

Zoom out further and the question gets bigger

If AI produces a significant efficiency gain across industries, across society: where does all of that freed-up capacity go?

Does it go into services that are developed further and deeper? Into products that are more carefully considered? Into work that is more thoughtful, not just more productive? Into creating sustainable growth? Or does it simply mean that we do the same things with fewer resources and call it progress?

This is not a technology question. It’s a question about values. About what we believe efficiency enables. I embraced AI eagerly, believing that soon I would be able to do more of what I do best: think. Now, well, I’m not so sure. There are days when I feel I might be thinking less.

I don't have a tidy answer to this

I’m not sure one exists. But I am increasingly convinced that the most important question about AI is not the one we keep asking. The question is not how AI saves us time.

The question is: for whose benefit, and toward what end, do we use what it frees up?

That’s a strategic choice. It’s a leadership choice. And it’s one that most organisations haven’t made yet, perhaps because they haven’t realised it’s theirs to make.

Look at your own organisation. Where did the saved hours go last quarter? Into development, into deeper thinking, into something your customers can feel, into progress? Or into the next round of doing more with less?

The efficiency trap is not inevitable. But escaping it requires noticing you’re in it.

Further reading

The idea that AI-driven productivity gains can become self-defeating is gaining traction in both research and practice. For those interested in exploring the broader conversation on the efficiency trap:

  • Ranganathan, A. & Ye, B. (2026). “AI Doesn’t Reduce Work – It Intensifies It.” Harvard Business Review, February 2026. Research showing that AI tools consistently intensify work rather than reducing it: employees work faster, broader, and longer, often without being asked.
  • Knowledge at Wharton (2025). “The AI Efficiency Trap: When Productivity Tools Create Perpetual Pressure.” An analysis of how AI productivity gains automatically become new performance baselines, creating escalating expectations rather than freeing workers for strategic thinking.
  • Winter, R. (2026). “The Chat Trap: When AI Makes Your Thinking Softer.” A sharp examination of how casual AI use can quietly erode judgement by replacing genuine thinking with fluent prose.