The default pitch for AI is productivity. Do more. Get there faster. Automate the boring stuff so you can focus on what matters.

This sounds sensible. It is also a trap.

The assumption underneath is that the goal is already correct and you just need to reach it more efficiently. But if the goal is wrong, getting there faster makes things worse. You produce more of the wrong thing, more quickly, with greater confidence.

This is exactly the problem most organisations already have. The frame is fixed. The destination is settled. Nobody is asking whether it should be. AI, used this way, just accelerates the existing direction of travel.

A different kind of usefulness

The most useful thing AI has done for me is not saving time. It is letting me do things I could not do before. I use it to code, which was never in my skill set. I use it as a mirror, to reflect half-formed ideas back to me in a shape I can examine. I use it to think across dimensions that are hard to hold in one mind, and to test theories against a body of knowledge I do not fully possess.

None of this is productivity in the conventional sense. It is closer to capacity. The work that comes out of it is not faster. It is different in kind.

The email test

Consider the people who use AI to write their emails. An automated voice, pretending to be a human voice, sent to another person who may well be reading it through their own AI summary. Two bots talking to each other, with a performance of human connection stretched over the top.

If we agreed to coordinate through automated messages, that would be one thing. But pretending you wrote something you did not, to someone who trusts that you did, is a strange way to use the technology. It does not save time. It removes meaning.

Twice as much, or half as much

You can use AI to get fourteen hours of work done in eight. Or you can use AI to do four hours of work that creates value beyond anything you could have managed alone.

The first is productivity. The second is something else: judgment, quality, discovery. Most people are reaching for the first, because it fits the logic they already operate in. Do more. Bill more. Ship more.

But that logic is the problem, not the solution.

What this looks like inside a company

The distinction matters most at the organisational level, because that is where it either changes something or doesn’t.

A company that adopts AI for productivity gets predictable results. Reports generated faster. Drafts produced in bulk. Meetings summarised automatically. The same work, slightly cheaper, at marginally higher volume. The savings are real but small. And because the work itself has not changed, the organisation learns nothing new about what it should be doing.

A company that adopts AI for capacity looks different. It might use AI to stress-test a strategy against scenarios nobody had time to model. It might let a product designer explore ten directions instead of two, not to ship ten products, but to see which direction reveals something unexpected. It might give a junior analyst the ability to interrogate a dataset in ways that previously required a specialist, and then pay attention to what they find.

The difference is not the tool. It is whether the organisation treats AI as a way to do the same things cheaper or as a way to notice things it was previously unable to see.

The first approach is easier to measure. It shows up in quarterly reports as cost reduction. The second is harder to quantify and easier to ignore, which is why most companies default to the first.

The real risk

The danger is not that AI implementation fails. It is that it succeeds at the wrong thing. An organisation that uses AI to double its output of poorly considered work will look productive by every internal metric. Revenue per employee goes up. Turnaround times come down. The dashboards glow green.

Meanwhile, the thing that actually needed to happen, the question nobody asked, the product nobody built, the shift in direction that would have mattered, gets buried under a faster, more efficient version of the status quo.

AI is not good or bad. Used well, in the right context, with the right intent, it is extraordinary. Used as a way to avoid thinking about what is actually worth doing, it just makes the avoidance more comfortable.

The companies that will benefit most are not the ones that move fastest. They are the ones that use the time AI frees up to ask whether they are solving the right problem. Most will not do this. The ones that do will be difficult to compete with.

Then you realise: this could actually be done with, or without, the AI.


↳ The capacity/productivity distinction is grounded in the purpose/task argument developed in Purpose, Task, and the Problems Nobody Has Named Yet.

↳ For a historical frame on where AI implementation currently sits, see The Stopwatch and the Algorithm.

↳ The risk of optimising the wrong thing at a deeper level is explored in The Mirror, The Map and the Breath.
