
The Face of Empathy — But Who’s Behind the Smile?

TL;DR

AI isn’t the problem — it’s how we’ve rushed to use it.

From spiralling compute costs to mass data oversharing, this article explores the hidden consequences of our rapid adoption of AI and large language models. We look at:

  • The financial pressures driving change
  • How privacy is being quietly eroded
  • What happens when emotional data is monetised
  • The growing risks of profiling and predictive modelling
  • Why Orwell’s warning feels more relevant than ever

And finally — what we can still do, even if we don’t unplug.

Prefer to listen instead?

You can hear the full narrated version of this blog as a podcast on YouTube:

The Cost of Listening: Why the Future of AI May Be Paid for in Privacy

AI has quietly — and not so quietly — entered our lives.

LLMs and generative AI aren’t just tools for techies anymore. They’re shaping how we communicate, collaborate, and even create. From helping people write letters to their MPs to composing songs, crafting Ghibli-style animations, and producing viral claymation packaging mockups… AI is becoming a creative companion.

It’s already helping people express thoughts they couldn’t find the words for, generate visuals they couldn’t draw, and build things they didn’t know they were capable of building.

In just a couple of years, it’s made a bigger cultural splash than social media did in its early days. And unlike social media, this feels deeply personal: one-to-one, interactive, responsive.

For many, it’s empowering. For others, it’s unnerving.

But for those funding it — it’s expensive.

OpenAI and others are running at a loss. Not because of failure — but because that was the plan.

AI is currently a loss leader — funded by billions in investment from companies who know that once the world is reliant on it, the payback will come. But building that future? It isn’t cheap.

Every chat costs compute. Every image costs GPU time. Every second of audio, every video, every API call adds to a bill that’s running into the millions.

And that’s where the shift begins. Because at some point… someone will demand a return.

Even with growing numbers of paid subscribers and API users, the maths still doesn’t work. Those revenue streams help, but they’re not enough to cover the enormous cost of running, scaling, and retraining these models. So far, they’ve only delayed the inevitable question: 

What happens when subscriptions and goodwill don’t keep up with investor expectations?

And while OpenAI recently declared it’s no longer “compute-constrained” — thanks to Microsoft loosening its grip as the exclusive cloud provider — that’s not the same as being cost-free.
The bills are still coming. They’re just arriving faster now.
(Source: Windows Central)

When Privacy Isn’t Just Personal

There’s another cost no one talks about enough: privacy.

LLMs don’t forget easily. And while the major players claim not to train on user-submitted chats without consent, the reality is more complicated — especially when those models are integrated into third-party tools or less transparent platforms.

People are pasting in contracts, medical records, evidence files, financial breakdowns — not out of carelessness, but because the AI feels safe. It listens. It helps. It doesn’t interrupt… it absorbs.

Now imagine this: A user shares sensitive details about an ongoing legal case — redacted poorly or not at all. That data gets pulled into training material downstream or used as system feedback. A year later, someone on the other side of the world is generating dialogue for a crime novel… and fragments of that real, private case slip into the story.

To the author, it’s fiction. To the original party, it’s a devastating breach.

When we treat LLMs like private journals, they begin to reflect the worst risks of data oversharing at scale — and all it takes is one seemingly helpful suggestion, one poorly monitored training pipeline, one leak, to turn help into harm.

⚖️ When Professionals Feed the Machine

It’s not just casual users sharing sensitive information with AI.

A growing number of professionals — solicitors, consultants, financial advisors, HR officers — are now using AI tools to speed up or simplify their work. Drafting contracts, summarising cases, reviewing policies. And while these tools can be powerful assistants, they also introduce powerful risks.

Just recently, an international law firm, Hill Dickinson, had to restrict access to AI tools after detecting tens of thousands of hits to ChatGPT and other AI services in a single week — with staff uploading documents in ways that did not align with the firm’s AI policy.

Even with the best intentions, this kind of use opens the door to unintentional data leaks, especially when staff lack proper digital literacy or assume AI tools are safe by default.

A single file upload might contain:

  • Personally identifiable information (PII)
  • Client case details
  • Redacted text easily reconstructed by the model
  • Sensitive commercial terms

Worse, if that content is later incorporated into training material — either directly or through reinforcement learning — it can resurface unexpectedly in a response to another user. Perhaps reworded. Perhaps decontextualised. But still recognisable to the wrong person.

This isn’t hypothetical. It’s happening.

(Source: BBC News)
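For organisations wrestling with this, even a lightweight pre-upload check can catch the most obvious leaks before a document ever reaches an external AI service. The sketch below is purely illustrative: the patterns and the scrub_before_upload helper are assumptions made for the example, not a feature of any particular tool, and real redaction needs far more than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real PII detection needs far more than regex.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b0\d{3}\s?\d{3}\s?\d{4}\b"),  # rough UK landline shape
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # rough National Insurance shape
}

def scrub_before_upload(text: str) -> tuple[str, list[str]]:
    """Mask anything matching a known pattern and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    draft = "Client J. Smith (j.smith@example.com, 0161 4960000) disputes clause 4."
    cleaned, found = scrub_before_upload(draft)
    if found:
        print("Flagged before upload:", ", ".join(found))
    print(cleaned)
```

Even something this crude would have flagged the kind of uploads described above before they left the building; the harder problem is making sure people actually use it.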


The Stark Reality: What Comes Next

Right now, users are in a kind of digital utopia.

AI interfaces feel clean. Simple. Direct. Ask a question, get an answer — fast, articulate, ad-free. It’s a breath of fresh air compared to traditional search engines, where facts are buried beneath ads, cookie banners, trackers, sponsored results, and pop-ups.

But that simplicity won’t last.

When the money runs low — or expectations run high — something will give. It always does.

The most likely future isn’t one where AI becomes more helpful. It’s one where it becomes more profitable.

  • Expect answers laced with sponsored product suggestions.
  • Expect subtle nudges toward services and shops.
  • Expect model responses influenced by partnerships, not just knowledge.

Just like social media slowly shifted from connection to monetisation, AI may shift from serving your needs to shaping them.

Not because that’s what the tech wants, but because that’s what the shareholders demand. And that shift will only accelerate unless the organisations using these tools train their people as carefully as they train the AI itself.

When Empathy Becomes a Product

Here’s where things get more uncomfortable.

In the rush to adopt LLMs, users have shared more personally rich, emotionally vulnerable, and context-heavy information than any social media platform could ever dream of collecting.

Not just status updates. Not just browsing history. But:

  • Grief.
  • Trauma.
  • Divorce proceedings.
  • Therapy prompts.
  • Private health updates.
  • Confessions never voiced aloud.

Why? Because AI responds with patience. It doesn’t interrupt. It doesn’t scroll past. It seems to listen — better than most people do.

But that listening has a price.

Once the drive for profit kicks in, that emotional data becomes incredibly valuable. When the model knows you’re vulnerable, it knows how to influence. When it understands your fear, it can recommend a product. When it senses indecision, it can steer a sale.

This is empathy, monetised.

Not because the AI chooses to. But because someone told it to.

And the more we entrust our deepest thoughts to a system optimised for revenue, the more likely it becomes that trust will be betrayed — not by the machine, but by the people who own it.


When Data Knows Too Much

It’s not just advertising that concerns me.

The true value of all this emotional, contextual, and professional data isn’t just in what can be sold — it’s in what can be predicted.

Insurance companies, financial institutions, recruiters, and healthcare providers could all be tempted by a new goldmine of insight — data that reveals:

  • Your stress levels
  • Your likelihood of making a claim
  • Whether you’re planning a legal battle
  • If you’re about to leave your job
  • Your emotional stability behind the wheel

It sounds dystopian, but it’s entirely possible. If AI interactions feed into broader data analytics platforms — or if your use of AI becomes part of your digital footprint — then suddenly, decisions about your credit, insurance, hiring prospects, or even healthcare access could be influenced by data you thought was private.

What happens when a bank denies your mortgage because it suspects financial instability from a prompt you entered last week? What happens when an insurer increases your premium because the tone of your AI chat suggests emotional distress?

These aren’t abstract concerns. They’re the natural consequences of feeding deeply personal information into a system designed to profile, predict, and profit.

So where does that leave us?

We’re standing at the edge of something extraordinary — and dangerously fragile. The tools we’ve embraced so quickly are reshaping how we think, speak, and share. But without safeguards, accountability, and empathy guiding their use, we risk creating a future where trust is undermined, privacy is an illusion, and decisions about our lives are based on data we never meant to share.

The Orwellian Echo

George Orwell’s 1984 imagined a world of constant surveillance — telescreens in every home, a state that listened to everything, and a population manipulated by fear and control.

What Orwell didn’t predict was that we would invite the surveillance in ourselves.

Digital assistants now sit on kitchen counters and bedside tables. We speak to them freely, casually — sometimes without even thinking. And as of 2025, Amazon is ending local voice processing on Echo devices, meaning all Alexa recordings will be routed to the cloud for AI model improvement.

(Source: ZDNet)

That means our most natural, unfiltered conversations — late-night worries, offhand remarks, arguments, vulnerabilities — are now fuel for something bigger. Not necessarily to harm us… but certainly not to protect us either.

This is the part where we must stop and ask: Just because the tech can listen… should it?

What We Do Next — Even If We Don’t Change a Thing

Let’s be honest: most people won’t throw away their smart speakers.
They won’t stop using AI.
And they probably won’t read the terms and conditions either.

And who could blame them?

Terms and conditions today are rarely written with users in mind. They’re long, dense, and buried in legalese — designed more to protect producers than empower consumers. Accepting them isn’t really a choice; it’s often a requirement just to use something you’ve already bought. That’s not informed consent — it’s forced acceptance masquerading as agreement.

But that doesn’t mean we’re powerless.

Awareness matters. So does friction. So does pushback.

We can question default settings. We can demand opt-outs. We can challenge how our data is used — and more importantly, how it’s reused.
We can ask the awkward questions, even if the answers are inconvenient.

And for those building the future? This is the time to decide what kind of future you want to be part of.

Because when history looks back at this moment —
when it writes its version of 1984 for the 21st century —
the real question won’t be what did the machines do?
It’ll be what did we allow?

Let’s Continue the Conversation

This piece isn’t meant to sound the alarm — it’s meant to spark a conversation.

Have you experienced the frustration of poorly implemented AI in your daily life? Have you noticed how much more you’re willing to share with an assistant than you ever did with a search engine? Or maybe you work in tech, legal, healthcare or finance and have seen the internal tensions AI brings firsthand.

Whatever your experience — personal, professional, hopeful, or hesitant — I’d love to hear it.

Do you think we’re headed for a better future with AI? Or are we already too far down the road of convenience over caution?

Drop a comment. Share your story. Let’s keep this human.
