AI Isn’t the Problem, It’s How We’ve Rushed to Use It

TL;DR:

AI isn’t the problem, it’s how we’re rushing to use it.
This post explores how poor implementation, cost-cutting culture, and a lack of empathy have turned helpful tech into a frustrating experience for many customers. From rigid chatbots to inaccessible systems, I reflect on my own experiences, where we’ve gone wrong, and why the real solution still involves humans.

Looking Back to Move Forward

Throughout history, people have feared innovation. It sounds dramatic, even alarmist, but look back and the pattern is clear:

every breakthrough has brought discomfort before it brought progress.

Take the wheel. A deceptively simple concept, it first appeared around 4200 BCE in Mesopotamia, initially used for pottery, and it changed the world forever. The wheel redefined logistics and engineering, reducing the labour required to move heavy materials. With it came new demands: maintenance, infrastructure, and even early research and development.

Fast forward to the industrial revolution. Weaving, once a highly skilled trade, became mechanised. The flying shuttle (1733), spinning jenny (1764), and power loom (1785) radically increased productivity. But they also devastated local economies reliant on handloom weaving. The Luddites didn’t destroy machines because they hated progress. They did it because they were being left behind. And yet, from that destruction came new industries, service roles, and forms of skilled labour.

The pattern has never changed. Innovation arrives, fear follows, and eventually, society adapts.

Before we get to the modern day, it’s worth reflecting on a few more recent historical examples.

The invention of the Automated Teller Machine (ATM) in the late 1960s was initially seen as a novelty. The UK’s first ATM was installed by Barclays in Enfield in 1967. It used slightly radioactive paper vouchers to dispense cash, and the idea of trusting a machine with your money felt bizarre to many. But over time, it reduced queues, freed up bank staff, and allowed 24/7 access. Bank tellers didn’t vanish — their roles evolved into financial advice, fraud handling, and relationship management.

The emergence of the internet in the 1990s had a similarly seismic effect. What began as a niche tool for academics and military networks rapidly became a household utility. It disrupted high street retail, encyclopaedias, video rentals, and media, but also spawned entirely new industries: web development, e-commerce, content creation, cybersecurity.

Again, the same pattern: disruption, fear, and then adaptation.

But progress also comes with trade-offs. As internet banking became the norm, bricks-and-mortar bank branches began to disappear. For many, that was a convenience, but for elderly and vulnerable individuals, it often meant reduced access to basic financial services. It’s a reminder that innovation must be inclusive, not just efficient.

From Human Support to Scripted Hell

Thirty years ago, customer service call centres invested in people. First-line support would listen, triage, and escalate if needed. The experience wasn’t always perfect, but at least you felt heard.

Then someone realised that most calls followed predictable patterns. Scripts were introduced. Flowcharts followed. Call agents were soon bound by rigid frameworks that penalised deviation. Even when a customer clearly understood their issue, the system insisted they go through every irrelevant step.

Then came outsourcing. Scripts became essential guardrails for offshore agents who often lacked deep domain knowledge. The process shifted from service to containment.

And now? We’ve taken those same flawed systems and handed them to AI.

Chatbots Are Just Bad Scripts in Disguise

My recent experiences speak volumes:

  • A credit card voice assistant trapped me in a loop asking for a card number I didn’t have. Every attempt to explain led me back to the same polite dead end.
  • A faulty white goods delivery left me fighting with a chatbot that didn’t understand the situation. None of the available options fit, and there was no way to break out.
  • A premium IT supplier’s AI phone system started reading every line item from my order aloud, with no way to interrupt or skip ahead.
  • A broadband provider’s support AI offered only generic scenarios. Anything outside a password reset or billing query simply didn’t compute.

To make matters worse, some companies now deliberately keep customers on hold for 15+ minutes, hoping they’ll give up and return to the broken AI chatbot. It’s not just frustrating — it’s deeply dismissive. It tells the customer: you’re no longer our priority. You’ve paid. Now please stop bothering us.

This kind of approach doesn’t just reflect poor customer care, it reflects a mindset where customer needs are an inconvenience. Part of the problem might lie in shareholder-driven thinking, where cost-cutting and short-term gains are prioritised over long-term customer loyalty. But it also feels like some companies have simply grown so big, so dominant, that they no longer need to care. When customers have limited alternatives, complacency becomes the norm. And that mindset is what AI ends up scaling.

These short-sighted strategies, often masked as “blue sky thinking”, undermine confidence in AI rather than build it.

When innovation is used to sideline the human experience, it doesn’t feel like progress — it feels like abandonment.

What We’re Teaching AI Might Be the Problem

The real kicker?

These rigid, outdated scripts — full of frustration and inefficiency — are now the training data for modern LLMs.

In trying to scale customer service, many companies are amplifying the worst parts of their processes, not fixing them.

We’re feeding AI the wrong lessons.

AI isn’t inherently bad. But if it’s trained on flawed interactions, it will replicate and scale those flaws. If it isn’t taught to recognise nuance, emotion, or urgency, it will never know when to escalate. It will just keep looping.

It Doesn’t Have to Be This Way

We also need to consider the accessibility of AI systems for vulnerable groups — especially those with learning difficulties, cognitive impairments, or language barriers. While automation may streamline common queries, it can be overwhelming or confusing for those who rely on clear, compassionate human interaction. For these users, the ability to break out of an AI loop and speak to a real person isn’t just a convenience — it’s a lifeline.

Yes, AI can be calm, tireless, and consistent. But it lacks the one thing that real customer service professionals can offer: empathy. Not counselling. Just the ability to recognise when someone needs a different kind of help.

That takes more than algorithms. It takes thoughtful training, collaboration, and investment.

We didn’t build call centre decision trees overnight. It took years of iteration and insight. Today’s AI needs the same commitment.

AI needs collaboration with humans, not replacement of them.

And yes, even the best AI will still face abuse from some users. But good systems aren’t built to be perfect. They’re built to be resilient enough to know when to hand control back to a human.
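That hand-off principle can be made concrete. Here is a minimal sketch (all names and thresholds are hypothetical, not taken from any real system) of a bot that tracks how often it fails to understand the user and hands control to a human before the conversation becomes a loop:

```python
# Hypothetical sketch: a chatbot turn handler that escalates to a human
# after repeated failures to match the user's message to a known intent.

KNOWN_INTENTS = {"password reset", "billing query"}  # assumed canned scenarios
MAX_FAILURES = 2  # assumed threshold before handing off to a person

def handle_turn(message: str, failures: int) -> tuple[str, int]:
    """Return (reply, updated failure count), escalating after repeated misses."""
    if message.lower().strip() in KNOWN_INTENTS:
        # Recognised intent: answer it and reset the failure counter.
        return f"Sure, I can help with your {message.lower().strip()}.", 0
    failures += 1
    if failures >= MAX_FAILURES:
        # The loop-breaker: stop retrying and route to a human agent.
        return "I'm connecting you to a human agent now.", failures
    return "Sorry, I didn't catch that. Could you rephrase?", failures
```

The design choice is the point: the failure counter is the system admitting it is stuck, which is exactly what the rigid chatbots described above never do.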

The True Test of Any Company

There will come a time when AI works as intended, and when it does, it may finally reduce friction, improve experience, and free people to focus on what really matters.

We’re just not there yet.

And at the end of the day, the true test of any company or service provider isn’t how well they sell you something. It’s how well they respond when something goes wrong.

Let’s Talk About It

Listen to the AI-generated podcast version of this post on YouTube.

I’d love to hear from others who’ve lived through this shift, whether as a customer trying to resolve a frustrating issue, or as someone involved in designing, developing, or deploying AI systems.

Have you seen it done well? Terribly? Have you been involved in training a model or setting up a virtual assistant that actually helped someone?

Drop your thoughts in the comments or connect with me directly. Let’s build a better conversation around how AI can support us, without replacing what makes service human.
