Exactly 100 years ago, a new technology was about to change the world. Sound familiar?

In January 1926, a small group of people gathered to watch a flickering image appear on a screen. It was unstable, low quality and easy to dismiss. At the time, it would have been reasonable to see it as an interesting technical experiment rather than the beginning of anything significant.

That moment is now recognised as the first public demonstration of live television.

Looking back, what stands out is not how impressive it was, but how little it seemed to matter at the time. The real importance of that moment only became clear much later, once the consequences had played out.

A hundred years later, in January 2026, I found myself thinking about this, because AI feels like it sits in a similar place today.

Most of what we see now is imperfect. The outputs are inconsistent, the hype often runs ahead of reality, and there is a lot of noise. And yet something fundamental has already shifted. The capability exists, it works well enough, and it is spreading faster than our norms around work and judgement can adapt.

Television did not replace entire professions overnight, but it quietly changed what was valued inside them. Politics became visual and leadership became performative, whether people liked it or not. Communication skills mattered in different ways. The roles largely stayed the same, but the people who succeeded within them changed.

AI appears to be doing something similar.

A lot of knowledge work used to be defined by execution: writing, researching, analysing, summarising. These were time-consuming activities that signalled competence, and AI now does much of that work quickly and cheaply.

As a result, the value is moving elsewhere: towards judgement, towards framing problems properly, towards knowing what matters and what does not. Those skills are harder to measure, but they are becoming more important, not less.

Television shifted the economics of work by rewarding reach. One broadcast could influence millions. Effort mattered less than amplification. AI is starting to do something comparable for cognitive work. Small teams can now operate at a scale that previously required far more people. Individuals with good judgement and strong tool fluency can create disproportionate impact.

This helps explain why organisations are becoming leaner, why layers are being questioned and why output alone is no longer a reliable signal of value. The work has not disappeared, but the leverage has moved.

There is also a familiar risk. Television blurred the line between watching and participating. Over time, that had consequences for attention, trust and depth. AI carries a similar danger: prompting can start to feel like thinking, and polished output can be mistaken for understanding.

The concern is not lower productivity; it is lower accountability for the thinking behind the work. That has implications for leadership, education and how we assess competence, particularly in hiring.

The television demonstration in 1926 did not change the world that day; it changed the direction of travel. AI feels like one of those moments, not because it is finished, but because it is already shaping behaviour, expectations and advantage.

The question is not whether this technology will change how we work. It already is.

The more interesting question is whether we use it in a way that strengthens judgement, or quietly replaces it.

History suggests that distinction tends to matter far more than the technology itself.