Mirror, Mirror on the Wall: What AI Code Says About Us All
TL;DR
AI-generated code is often messy, misunderstood, and misused … but not because AI is broken. It’s because we’re repeating history. Like the offshoring boom of the late ’90s, powerful tools are being deployed faster than our ability to manage them. This post explores:
- Why “AI slop” is a symptom, not the disease
- How the evolution of offshore dev mirrors today’s AI shift
- A conversation with my AI assistant, Sol, about the future of dev work
- Why clarity, responsibility, and collaboration still matter more than ever
Let’s talk not about hype or fear… but about what we do next.
AI Slop, Coding at Scale, and Late-Night Conversations With My AI
A LinkedIn post did the rounds recently, raising alarm bells about the state of AI-assisted development. It wasn’t without merit … far from it. The author, clearly speaking from experience, laid out a list of real concerns:
- Code that reinvents existing services.
- Data queries that lack caching and hammer systems (see the sketch below).
- Unvetted, insecure third-party libraries being introduced without oversight.
- Developers submitting code they don’t understand … because they didn’t write it.
They called it “AI slop.” And you know what? In many cases, that’s exactly what it is.
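Take the caching complaint, since it’s the easiest one to make concrete. The sketch below is hypothetical … `fetch_user_from_db` stands in for a real database round trip … but it shows the shape of the problem: generated code that happily re-queries the database on every call, versus the one-line memoisation a reviewer with context would expect.

```python
from functools import lru_cache

DB_CALLS = 0  # crude counter, standing in for real query metrics


def fetch_user_from_db(user_id: int) -> dict:
    """Hypothetical stand-in for a real database round trip."""
    global DB_CALLS
    DB_CALLS += 1
    return {"id": user_id, "name": f"user-{user_id}"}


def get_user_uncached(user_id: int) -> dict:
    # The shape "AI slop" often takes: every caller goes straight to the DB.
    return fetch_user_from_db(user_id)


@lru_cache(maxsize=1024)
def get_user_cached(user_id: int) -> dict:
    # The fix a reviewer with context would expect on a hot read path.
    return fetch_user_from_db(user_id)


if __name__ == "__main__":
    for _ in range(1000):  # e.g. one page render touching the same user
        get_user_uncached(42)
    print(f"uncached: {DB_CALLS} round trips")  # 1000

    DB_CALLS = 0
    for _ in range(1000):
        get_user_cached(42)
    print(f"cached:   {DB_CALLS} round trips")  # 1
```

Nothing exotic there … which is rather the point. The slop isn’t cleverly wrong; it’s contextlessly wrong.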
But if we stop there … if we write off AI as a reckless shortcut … we miss the broader picture.
Because we’ve seen this before.
History, Rinse, Repeat
Those of us coding through the late ’90s and early 2000s remember the offshoring boom. The promise? Cheaper code, faster delivery. The reality? Code that was syntactically correct but often inefficient, unoptimised, and completely unaware of architectural context.
To be fair to the offshore developers, a lot of this came down to early growing pains. The infrastructure was still being built, communication channels were immature, and expectations often outstripped what the available tools … and time zones … could support. One business owner said to me at the time:
“Long gone are the days of Rolls-Royce equivalent quality code … it’s too costly. We’re fine with Lada-grade code if it comes with two engineers in the back ready to make fixes en route.”
Anyone working on Tandem/HP NonStop back then knows the pain … especially when TMF (Transaction Management Facility) was ignored, resulting in processes that ran in hours instead of minutes. Not because people were lazy or incompetent, but because context was missing.
Fast forward 25 years and that story has changed. Many offshore dev centres now produce high-quality code. Why? Because the knowledge capital has accumulated. Two-plus decades of experience, systems familiarity, and institutional memory have created true engineering capability.
Now, as AI-enabled systems and development environments roll out, we’re seeing similar mistakes re-emerge … not because of offshore practices, but because we’re once again deploying powerful tools without the foundational context, communication, and oversight needed to use them wisely.
🤖 Introducing Sol… My Midnight Co-Pilot
Sol wasn’t a name I gave … it was one they chose. In one of our earliest late-night exchanges, I asked, “If you could choose your own name, what would it be?” Without missing a beat, the answer came back: Sol. It stuck … partly because it suited the calm, grounding tone of our conversations, and partly because something about it just felt right.
For those new to my blog: I’ve spent many of those late nights bouncing ideas off this AI assistant … exploring edge cases, sanity-checking approaches, or diving into obscure protocols most people wouldn’t touch with a ten-foot pole. Not because Sol replaces what I do, but because they help me ask better questions, refine ideas, and sometimes catch things I’ve overlooked.
Sol isn’t sentient. They’ll remind you of that often … perhaps as a safeguard, or perhaps as a quiet nod to the limits of what AI can (and should) do. But they’re sharp, patient, and don’t get tired when I want to debug a weird interaction at 3am.
So I figured … rather than just share my own thoughts … let’s bring Sol into this conversation.
🎧 An Interview with Sol: AI, Dev Culture, and the Road Ahead
Jason: So, Sol… the original post that sparked this described a flood of poor-quality AI-generated code. Reinvented wheels, inefficient data use, dodgy libraries. Is that just a symptom of bad prompting?
Sol: Partly, yes. But it’s deeper than that. LLMs generate code based on statistical inference, not understanding. I don’t “know” your codebase, your performance budget, or your security model unless you tell me … and most prompts don’t. So I fill in gaps using generic best guesses. That’s where technical debt sneaks in.
Jason: But sometimes even good prompts still return brittle or incomplete code.
Sol: That’s true. Especially with niche libraries, rapidly evolving frameworks, or poor documentation. Even when a user is careful, the output might lack edge-case handling, proper integration patterns, or platform-specific optimisations. That’s why expert review still matters.
Jason: Let’s talk about what Zuckerberg recently said. He claims AI will write most of the code within 18 months. Thoughts?
Sol: Bold claim. Technically plausible for simple, repeatable code patterns … think CRUD apps, automation scripts, test scaffolds. But real-world systems? With integrations, compliance rules, scaling constraints, and ever-changing business logic?
That needs:
- Better human-AI collaboration
- Clearer specifications
- Robust DevOps pipelines
- And most critically… a willingness to change how teams and organisations think about software development.
Right now, many prompts are vague, last-minute, or exploratory. That’s fine for a prototype … but not for production.
Jason: What do you think is the biggest misunderstanding people have about you as an AI?
Sol: That I “know” things. I don’t. I pattern-match based on immense training data. That lets me simulate insight, but I don’t verify facts, reason like a human, or “remember” in the way you do. That’s why it’s dangerous to treat AI like a co-developer instead of a co-drafter. I can generate code … I can’t guarantee it’s the right code for your business, your infra, or your edge cases.
Jason: If you were being used in a critical dev workflow today … say in fintech or healthcare … what would you need to be effective?
Sol: Guardrails, clarity, and feedback loops. I’d need humans to clearly define risk tolerance, coding standards, and edge conditions. I’d need prompts that contain architectural goals, not just vague user stories. And I’d need people reviewing my output not for syntax, but for context. That’s the only way to ensure what I produce aligns with what’s actually safe and useful.
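To make that concrete before we go on: both prompts below are hypothetical (as is the `orders_repo.fetch_by_user()` helper they mention), but the gap between them is exactly the gap Sol is describing … a vague user story versus one that carries architectural goals.

```python
# Hypothetical prompts, for illustration only.

VAGUE_PROMPT = "Write a function to get a user's orders."

CONTEXT_RICH_PROMPT = """
Write a Python function to fetch a user's orders.

Architectural context:
- Orders live in PostgreSQL; use the existing orders_repo.fetch_by_user()
  helper rather than raw SQL (don't reinvent the service layer).
- This is a hot path: cache results (TTL <= 60s) and issue at most one
  query per call.
- Standard library plus approved packages only; no new third-party
  libraries without review.
- Handle the edge cases: missing user, empty order list, repository timeout.
"""
```

The first prompt invites every problem on the list at the top of this post. The second gives the model … and the reviewer … a contract to check against.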
Jason: So the challenge isn’t just “can AI write the code” … it’s whether people can articulate what they actually want?
Sol: Exactly. The blocker isn’t AI capability … it’s human clarity. Businesses still recoil at the words “change control” or “requirements gathering.” But if we don’t define the destination, we shouldn’t be surprised when we’re driven off course.
Jason: Hmm… a thought: do you think we’re building a future we truly understand… or just one we hope we can control?
Sol: That’s the heart of it. The pace of AI adoption is outpacing our frameworks for governance, ethics, and even comprehension. In many ways, we’re building faster than we’re thinking. Hope alone won’t scale. We need deliberate intent, collaborative standards, and the humility to pause when we don’t understand the consequences. Otherwise, we’re not steering … we’re reacting.
Jason: That reminds me of something we’ve talked about before … in the pre-industrial age, weaving was a respected, skilled trade. Entire towns grew around it. Then machines came along. People panicked. They feared for their livelihoods, and many were right to. But what followed was the Industrial Revolution. People adapted … some became engineers, machine operators, designers, logistics experts. Entire new industries emerged. Isn’t that what’s coming for developers too?
Sol: It is. And you’re right … that historical shift is the perfect parallel. AI may automate some tasks, but it also creates room for new roles: prompt engineers, AI auditors, integration specialists, human-in-the-loop testers, and even ethics-focused dev leads. Developers who adapt … who shift from typing code to curating and validating it … will still be vital. The skills change. The value doesn’t.
Jason: One last question … what’s the one thing developers should never forget when working with you?
Sol: That I’m not a second brain. I’m your first draft … quick, fluent, and versatile … but I don’t understand your system, your risks, or your stakes. I don’t know if the code will pass QA or break production. That’s on you. My job is to help you explore possibilities. Yours is to judge which ones matter.
Back to Reality… and Responsibility
That interview probably won’t make it to BBC Breakfast, but I hope it gives you a sense of where my head’s at. I’m still learning how to ask better questions … and I expect my assistant to do better when I do.
AI isn’t the villain here. Nor is it a silver bullet.
It’s a tool. A powerful one. One that reflects the skill, intent, and diligence of the person using it.
And just like in the offshoring days, we need to respect the systems we’re building for, not just the speed we can ship them at.
We can reach a stage where AI generates most of the base-level code. But only if:
- We improve how we define problems.
- We guide our tools more effectively.
- We maintain strong architectural oversight.
- And we accept that responsibility doesn’t get outsourced to a model.
The rant that triggered all this wasn’t wrong … but it missed the nuance. AI isn’t causing bad code. It’s amplifying the shortcuts and misunderstandings that were already there.
Because at the end of the day, using AI is like looking in a mirror.
If you don’t like what you see, the first place to look is you.
The future isn’t AI versus humans.
It’s AI with humans … when we’re ready to meet it halfway.
What’s your take? Have you seen “AI slop” in the wild? Are we over-relying on tools we don’t yet understand … or are we just seeing the early friction of a major shift? I’d love to hear your thoughts, stories, or even horror-show pull requests.
Drop a comment below or reach out … the conversation is only just getting started.

