
Who Owns Your Voice?

The Hidden Risks of Writing in the Age of AI

TL;DR – Who Owns Your Voice?

In the digital age, your writing style is more than just expression; it's a biometric fingerprint.

From old-school plagiarism detection to modern-day stylometry, your unique use of grammar, phrasing, and rhythm can identify you… or be used to impersonate you. As AI tools become more powerful, they can clone your voice, flatten your individuality, or even forge messages in your style with uncanny precision.

Professionals are increasingly outsourcing their communication to tools like ChatGPT and Copilot. But as everyone starts to sound the same, we lose the ability to spot genuine human voices, and authenticity becomes a liability.

This post explores:

  • The rise of stylometry in OSINT and cyber forensics
  • The paradox of using AI to anonymise, or mimic, ourselves
  • Real risks of voice theft, impersonation, and linguistic deepfakes
  • Future tools like watermarking and “style locks” to protect authorship
  • The trade-off between privacy and personality in an age of homogenised writing

Final thought?
If your voice can be weaponised, maybe it’s time to rethink how, and where, you use it.


“That’ll Never Work…”

The practice of identifying someone by their writing style, now known as stylometry, was first formalised in 1890 by Polish philosopher Wincenty Lutosławski. He applied mathematical analysis to Plato’s texts, unknowingly laying the foundation for a discipline that would evolve into a critical tool for digital forensics, authorship attribution, and even cybersecurity.

Of course I’d never heard of Wincenty when I started out.

Back in the early ’90s, I floated an idea to my mentors that you could identify someone, even if they changed their name or account, by their writing style.
Not just what they said, but how they said it:

  • The same odd spelling mistakes
  • Their choice of punctuation…
  • Their grammatical style
  • The way they used certain phrases or pet words

They laughed. Said it was fanciful. “Not a real use case.”

At the time, I was a student working on my dissertation. Plagiarism was a known issue, but not at the scale we see today, or at least the detection wasn't.

In 1990, most of the public hadn't even heard of the internet. Dial-up BBSes and university terminals were the domain of geeks and techies.

During that time, a fellow student approached me and asked for help with an assembly language task set by the lecturer. We got on well, and for me, assembly was trivial; I used to dream in opcodes. I agreed, wrote the solution, and explicitly told him to change the labels and maybe reorder some of the routines.

Imagine my surprise when the lecturer pulled me aside and asked if I’d written the code for him. He had reviewed both our submissions, and it was obvious. The other student hadn’t changed a thing. The lecturer showed me the similarities and said, in no uncertain terms, that the optimisation level, structure, and style were unmistakably mine.

I think that was the moment the lightbulb went on, the idea that you could identify someone by the unique signature of how they write, whether code or prose. Back then, it was printed work and terminal outputs, not vast datasets scraped from public blogs or forums. Most of us didn't even have access to the DEC VAX or HP System V machines unless we were lucky.

Turns out, I wasn’t wrong. I was just early. I was describing stylometry long before I knew the word for it, and today, it’s not only real, it’s weaponised.

Stylometry’s transition from literary analysis to cyber forensics started in the early 2000s, as online content exploded. By the mid-2010s, it had become a recognised tool in OSINT, used to unmask pseudonymous threat actors, trace misinformation campaigns, and attribute digital footprints across forums and dark web activity.

Stylometry: The Fingerprint in Your Words

Stylometry also plays a growing role in OSINT (Open Source Intelligence). Patterns in writing, especially across forums, blogs, and social media posts, can help analysts connect pseudonymous activity to known identities. For threat hunters, intelligence officers, or investigative journalists, style can be just as revealing as IP logs or metadata.

Today, forensic linguists and AI researchers alike use writing style as a biometric of sorts. People can be:

  • Traced across forums
  • Linked to anonymous sock puppet accounts
  • Even prosecuted in court based on linguistic fingerprints

It’s not magic. It’s maths meets human habit:

  • How often do you use a semicolon?
  • Do you double-space after full stops?
  • What uncommon words crop up again and again? (“Moratorium,” in my case. Picked it up in a 90s training session and never let it go.)

I’ve worked in cybersecurity long enough to know that data isn’t just what’s typed, it’s how it’s typed. And when you’re trying to unmask a threat actor, this level of nuance can make or break a case.
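To make that concrete, here's a toy sketch of the kind of feature extraction a stylometric tool might perform: semicolon frequency, double spaces after full stops, sentence rhythm, and favourite "pet" words. The feature set and the sample text are purely illustrative, not the workings of any specific forensic product.

```python
import re
from collections import Counter

def stylometric_profile(text: str) -> dict:
    """Extract a handful of simple style features from a text sample."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total_words = max(len(words), 1)
    return {
        # Punctuation habits
        "semicolons_per_100_words": 100 * text.count(";") / total_words,
        "double_spaces_after_stop": len(re.findall(r"\.  ", text)),
        # Rhythm: average sentence length in words
        "avg_sentence_len": total_words / max(len(sentences), 1),
        # Pet words: the most frequent long words (8+ letters)
        "pet_words": [w for w, _ in Counter(
            w for w in words if len(w) >= 8).most_common(5)],
    }

sample = ("I proposed a moratorium; they laughed.  "
          "A moratorium, of all things; fanciful, they said.")
profile = stylometric_profile(sample)
print(profile["pet_words"])  # 'moratorium' tops the list
```

Crude as it is, a vector of features like these, compared across accounts, is the core of how pseudonymous posts get linked to known authors.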

🔍 Stylometric Example: What Does “Voice” Really Mean?

To show how stylometry works in practice, here’s a simple paragraph written in my natural voice:

Original (Jason-style):
Bear with me, because this might sound ridiculous… I don’t think we’re far off the day someone gets “cancelled” by an AI for impersonating themselves. Honestly, the irony writes itself. You spend 10 years writing blog posts, comments, emails, all laced with your quirks, your phrasing, your punctuation habits, and then some faceless LLM comes along, trains on it, and suddenly you get flagged for sounding like a bot.

Moratorium on logic, apparently.

Now here’s how that same paragraph might look if a fraudulent actor trained an AI to mimic me:

AI-Style Imitator:
Okay, hear me out, this might sound strange, but we’re probably not too far from a world where someone gets flagged or even banned for using their own writing style. The paradox is almost poetic. You’ve written thousands of words across the internet, and now some algorithm thinks you’re the copy.
You couldn’t make it up.

It sounds convincing. But there’s something… off. Slightly too polished. Slightly too general. The sarcasm is dampened. My pet phrase “moratorium” is gone. The rhythm is synthetic.

Now here’s what happens when AI rewrites the paragraph for corporate polish:

AI-Neutral Rewrite:
It’s conceivable that, in the near future, individuals may be mistakenly flagged for AI impersonation based on their own writing style. As people generate large volumes of online content, it becomes increasingly difficult to distinguish between original authorship and AI-generated replication.
Why does this matter? Because it presents new challenges for digital identity, trust, and the ability to verify authorship in online environments.

That version? No voice. No soul. No human edge.

That’s what I mean when I say your writing has a fingerprint, and when you polish too hard or let others mimic it, you risk erasing that fingerprint entirely.

AI as a Mirror, a Shield… or a Mask

Here’s where it gets complicated.

We’re now in an era where tools like ChatGPT, Claude, and Gemini are no longer just novelties, they’re quietly becoming extensions of how we think and communicate. More professionals than ever are outsourcing parts of their voice to AI:

  • Polishing LinkedIn posts to sound more executive
  • Drafting emotionally neutral emails to avoid misinterpretation
  • Turning raw thoughts into structured prose with just enough polish to impress

I’ve done it. You’ve probably done it. The appeal is obvious, it saves time, reduces friction, and removes the second-guessing we all experience when writing in professional settings.

But here’s the problem:

The same tech that polishes your message…
Can also replicate you with unnerving precision, quirks, phrases, cadence, and all.

It doesn’t need access to your email account or your files. It just needs your voice.
And if enough of your writing is out there, blog posts, comments, helpdesk tickets, product documentation, Slack messages, that’s all it takes.

Imagine this:

  • A phishing email, written exactly how you’d write it
  • A fake resignation letter in your tone, addressed to your boss
  • A support comment impersonating you on GitHub, quietly sabotaging your professional standing

It’s not science fiction. It’s social engineering with a linguistic payload.

And here’s the twist: the better your writing, the easier it is to train a model on it.

In that light, AI isn’t just a productivity tool.
It’s a mirror that reflects your style,
a shield that can mask it…
and a mask that others can weaponise against you.

But here’s the emerging paradox: as more people rely on Copilots, assistants, and generative tools to draft or polish their work, we may reach a point where almost everything starts to sound the same.

Professional emails, announcements, job posts, all increasingly indistinguishable, regardless of whether they were written by a human, a bot, or a threat actor.

And then it gets even more complicated:

  • If everything sounds AI-polished, what does authenticity even look like?
  • Should we deliberately flatten our public-facing writing to protect ourselves… and keep our true style for internal comms and trusted circles?
  • Do we want to live in a world where expression itself becomes an operational risk?

These aren’t rhetorical questions, they’re forensic ones. Because if your writing becomes the new biometric, then the case for preserving and protecting your voice isn’t just artistic… it’s strategic.

The Great Flattening: Is AI Killing Our Voice?

I’m noticing a trend in professional spaces, everyone is starting to sound… the same.

Whether it’s LinkedIn posts, internal memos, or “personal” newsletters, there’s a creeping uniformity to it all:

  • Polite but passive
  • Polished but sterile
  • Emotionally neutral and… vaguely Silicon Valley, even when you’re in the UK.

It’s ironic, really. At a time when we’re told to “find our voice”, the tools we’re using to write are sanding off the very edges that make us recognisable. Identity becomes a casualty of convenience.

Some people lean on AI for speed.
Some for safety, afraid of being judged or misunderstood.
Others deliberately use it to break their own pattern, making themselves harder to trace via stylometry or OSINT. And in some cases, that’s a smart play.

But here’s the cost: the more we let AI rewrite us into sameness, the more we erode our ability to spot genuine human expression.

If all writing feels AI-generated… how will we know what’s real?

This isn’t just a literary concern. It’s an operational one, for fraud detection, for digital forensics, for trust.

And I’ve seen both sides. I’ve watched people weaponise stylometry to catch impersonators… and I’ve watched others erase their own voice to avoid detection.

Privacy or Authenticity? The Modern Writer’s Dilemma

So what do we do?

On one hand:

  • Stripping out your personal style can shield your identity
  • Especially if you’re a whistleblower, a survivor, or someone who lives with persistent digital risk

On the other:

  • Flattening your voice can mean losing a core part of yourself
  • And in the worst case… it can allow someone else to steal your voice and use it more convincingly than you can

It’s a psychological and operational paradox:

  • If you write like yourself, you’re fingerprintable
  • If you write like everyone else, you become disposable

That tension, between safety and self, is only going to intensify.
We’re being pulled between the need to be expressive… and the need to be untraceable.

There was a time when voice was power.
Now, voice is a liability.
And that should give us pause.

What Comes Next: Watermarking, SEALs, and Identity Arms Races

The arms race is already underway.

We’re seeing:

  • Stylometric detectors trained to flag AI-generated text
  • Watermarking technologies in development to cryptographically prove authorship
  • SEAL AI (Self-Evolving Autonomous Language) agents that evolve their linguistic patterns to appear human… or deliberately avoid appearing as anyone at all

But here’s the kicker:
Even watermarking won’t help you if someone trains a model on you.

Because at that point, it’s not just AI-generated content, it’s you-style content, forged from your past.
Your voice… cloned, manipulated, and deployed.

That’s why the next wave of cybersecurity may look a lot more like linguistics:

  • Stylometric keys
  • Authorship fingerprints
  • Voice integrity checks baked into your digital comms stack
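What might a "voice integrity check" even look like? Here's one hedged sketch: compare the character-trigram profile of a new message against an author's known corpus using cosine similarity, and flag anything that drifts too far. The approach is a classic authorship-attribution baseline; the threshold and samples are invented for illustration, not a proposal for a real product.

```python
import math
from collections import Counter

def char_trigrams(text: str) -> Counter:
    """Frequency profile of overlapping 3-character sequences."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def voice_check(known_corpus: str, new_message: str,
                threshold: float = 0.5):
    """Return (score, passes) for a new message against an author profile."""
    score = cosine(char_trigrams(known_corpus), char_trigrams(new_message))
    return score, score >= threshold

known = "Bear with me, because this might sound ridiculous..."
score, ok = voice_check(known, "Bear with me, this might sound odd...")
```

A message identical to the corpus scores 1.0; a forgery in someone else's style scores lower. In practice you'd need far more text and far richer features, but the shape of the check is the same.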

We might end up with a future where writing tools include “style locks” or “forensic beacons” to confirm authorship.

Or perhaps internal comms will default to verified identity, while public comms become a kind of “homogenised safe mode”.

This isn’t just about deepfakes and phishing anymore.
It’s about preserving your linguistic identity in a world where imitation is effortless, and detection is falling behind.

Closing Thoughts: Own Your Voice… Before Someone Else Does

If there’s one thing I’ve learned over the years, it’s this:

Technology changes, but human patterns don't.

We leave traces. In how we walk, talk… and write.
The digital age has simply made those traces easier to collect, mimic, and exploit.

Whether you’re a techie, a teacher, or a tired manager trying to polish your outbox, remember this:

AI is a brilliant tool… but your voice is still yours.
Don’t give it away too easily.
And be damn careful who gets to train on it.

If you’ve made it this far, I’d love to hear your take:

  • Does this all feel too far-fetched, or eerily plausible?
  • Have you seen stylometry or voice mimicry used, for good or harm, in your professional world?
  • Are you actively flattening your style to stay safe… or working to keep your voice distinct?

Let me know in the comments, or drop me a message if you’d prefer to chat quietly.

And for those wondering about tools like BioCatch, yes, there are already platforms capturing behavioural biometrics like typing rhythm, speed, and inter-key latency. They can even tell the difference between how you type a ‘t’ versus an ‘e’. But those traits are ephemeral. They live at the point of input, not in the final message. Your writing style, on the other hand, lives on. And that’s where the real risk lies.
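For contrast, here's a minimal sketch of the kind of keystroke-timing signal those behavioural platforms capture: mean inter-key latency per key pair (digraph). The event format and timings are invented for illustration; real products use far richer signals than this. The point stands: this data exists only at the moment of typing, unlike your prose.

```python
from collections import defaultdict
from statistics import mean

def digraph_latencies(events):
    """events: list of (key, timestamp_ms) pairs in typing order.
    Returns the mean latency in ms for each consecutive key pair."""
    pairs = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        pairs[(k1, k2)].append(t2 - t1)
    return {dg: mean(ts) for dg, ts in pairs.items()}

# Hypothetical capture of someone typing "the the"
events = [("t", 0), ("h", 95), ("e", 180), (" ", 310),
          ("t", 420), ("h", 510), ("e", 600)]
profile = digraph_latencies(events)
print(profile[("t", "h")])  # mean of the two t→h gaps
```

Two people typing the same sentence produce different latency profiles; that's the behavioural fingerprint. But close the laptop and it's gone, whereas a blog post keeps leaking style for years.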

Let’s start a conversation… before someone else speaks in your name.
