Wimbledon: What a Line-Call Controversy Tells Us About Technology in the Workplace
This year’s drama on the manicured grass courts of Wimbledon didn’t come from a five-set thriller or a surprise upset. It came from a mistake in the line-calling technology, except it wasn’t the technology’s fault.
On a crucial point in a match between Britain’s Sonay Kartal and Russia’s Anastasia Pavlyuchenkova, a long backhand by Kartal was not called out by the automated system. Cue the outrage. Pundits, fans, and players piled on, blaming the robotic line judge for getting it wrong. Except it didn’t. It turned out a human had turned it off. The tech was fine. The human interface wasn’t.
Wimbledon has since made a quiet but crucial update: it has modified the system so that people can’t accidentally switch it off. It’s a small technical fix, but it reveals something much bigger about our collective relationship with technology. And it holds a powerful lesson for every business trying to navigate the rise of technologies such as AI in the workplace.
The Myth of Machine Infallibility
There’s a curious psychological double standard at play. When humans make mistakes, we shrug: “We’re only human.” But when technology slips, we rage: “How could it get that wrong?”
We expect perfection from machines. Especially when they wear the branding of AI, automation, or data science, we assume they’re immune to error. But we forget: most technology still relies on human setup, supervision, and sometimes literal on/off switches.
The truth? Most technological failures aren’t technical at all. They’re human.
It’s Not the Tech. It’s the Touchpoints
This Wimbledon debacle is a perfect analogy for what’s happening in many organisations today. Many companies are wary of adopting AI tools for fear that the tools will get something wrong. They are piloting these tools in a variety of ways, but very few are seriously interrogating the human layer of these systems.
- Do people know how to use them correctly?
- Are the right guard rails in place to prevent misuse or confusion?
- How is the organisation adapting its culture, roles, and expectations around these tools?
Just like line-calling tech, an AI system is only as effective as the people deploying and interacting with it. And just like Wimbledon, organisations will keep running into issues until they acknowledge that people, not the tools, are usually the weak link.
Cultural Mismatch > Technical Glitch
Beyond training, there’s a deeper challenge: cultural readiness.
Introducing AI into the workplace isn’t just a productivity boost. It’s a psychological shift. It challenges long-held assumptions about expertise, creativity, and even job security. It forces teams to rethink what they value in human work and how they collaborate with non-human colleagues.
If that cultural adaptation doesn’t happen, the tech will stall. Or worse, it will be blamed unfairly when something goes wrong.
The Opportunity
Technologies such as AI offer enormous scope to augment human potential, but that augmentation depends on two things: robust, practical training on the tools themselves, and deep cultural work to ensure people are ready to change how they work, learn, and lead.
Because when it comes to AI in the workplace, the question isn’t “Can we trust the technology?” It’s “Can we trust ourselves to work with it in the best way?”
Martin Talks, Matomico, martin@matomico.com