The New Frontier of Biocomputing: Power, Ethics and the Perils of Living Machines
What happens when you need to “switch off” a server that’s alive?
As a cybersecurity professional, I’ve spent years thinking about digital threats. Now I’m looking at biological ones, and the questions they raise will define the next decade of computing.
TL;DR
Companies like FinalSpark and Cortical Labs are building computers powered by living human neurons, promising million-fold energy savings but raising unprecedented questions.
The challenge isn’t just ethical (when is a neural cluster “alive”?); it’s practical: these systems need virology-lab protocols, not data-centre standards. In biocomputers, a virus doesn’t corrupt code; it corrupts cells. We need governance frameworks now, before commercial deployment outpaces our ability to answer fundamental questions about consent, decommissioning, and accountability.
A quiet revolution in the server rack
Somewhere in Switzerland, a server hums not because of processors or fans, but because clusters of living neurons are firing inside it.
That’s FinalSpark (https://finalspark.com), a Swiss biotech startup developing biocomputers, systems that use living brain organoids instead of silicon chips. Their Neuroplatform hosts tiny neural cultures that researchers can stimulate, monitor and even “train” remotely.
They’re not alone. Cortical Labs (https://corticallabs.com) in Melbourne made headlines when neurons in a dish learned to play Pong (published in Neuron, 2022: https://pubmed.ncbi.nlm.nih.gov/36228614/). Johns Hopkins University now leads an Organoid Intelligence consortium exploring similar hybrid systems.
It sounds like science fiction, but this field is growing quietly and fast.
The Benefits?
Biocomputing promises staggering energy efficiency. FinalSpark (https://finalspark.com/finalspark-low-energy-future) claims its living processors use a million times less energy than equivalent digital hardware. A neural network built from actual neurons adapts naturally, learns continuously and doesn’t need retraining from scratch.
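To get a feel for the scale of that claim, here is a rough back-of-envelope sketch. The rack power figure is an assumption chosen purely for illustration, and the million-fold ratio is FinalSpark’s own claim rather than an independently verified measurement.

```python
# Back-of-envelope illustration of the claimed million-fold efficiency gap.
# All figures are illustrative assumptions, not measured values.

RACK_POWER_KW = 10            # assumed draw of a conventional GPU rack
HOURS_PER_YEAR = 24 * 365
CLAIMED_RATIO = 1_000_000     # FinalSpark's claimed energy advantage

silicon_kwh = RACK_POWER_KW * HOURS_PER_YEAR       # ~87,600 kWh per year
biological_kwh = silicon_kwh / CLAIMED_RATIO       # ~0.09 kWh per year

print(f"Silicon rack:          {silicon_kwh:,.0f} kWh/year")
print(f"Biological equivalent: {biological_kwh:.2f} kWh/year (if the claim holds)")
```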
For AI and high-performance computing, that could mean servers that grow smarter instead of being replaced. And for a planet struggling with data-centre energy demands, “living compute” looks like the greenest tech imaginable.
But we’ve seen this pattern before: dazzling capability first, ethics later.
The ethical fault-lines
When does a collection of neurons become more than a component?
A 2024 review by Hartung et al. in Frontiers in Artificial Intelligence (https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1307613/full) warned that organoid intelligence research is “outpacing ethical frameworks.”
Four core tensions demand urgent attention:
- Consent and provenance – Donors must agree not just to medical use, but to their cells performing computation
- Sentience risk – At what point does complex activity cross from simulation to subjective experience?
- Decommissioning – If an organoid has learned, does shutting it down equate to killing or deleting?
- Accountability – Who is liable for the actions or outputs of a living processor that can rewire itself?
So far, there’s little regulation. Unlike animal research, which has decades of welfare law, biocomputing sits in an ambiguous space: part data centre, part lab.
When security becomes biosecurity
I’ve spent most of my career securing digital systems: encrypting data, patching vulnerabilities, defending against malware. But biological compute flips the risk model entirely.
If the hardware is alive, security isn’t just about code. It’s about care.
A power surge can kill the “processor.” A contaminated nutrient line can corrupt not only data but tissue. You don’t patch these systems, you culture them. Data-centre technicians could one day need biosafety training, and IT departments may find themselves governed by the same ethics protocols as medical research labs.
The forgotten frontier: infection
Everyone talks about consciousness and morality. Few talk about infection, but this might be our most immediate practical challenge.
A living bioprocessor can suffer a literal virus.
Neural cultures are vulnerable to contamination by bacteria, fungi, or latent human pathogens. A single slip in sterility could render weeks of computation meaningless, or worse, produce unpredictable behaviour.
These systems are typically housed in hermetically sealed microfluidic chambers, with nutrient media circulating through sterilised tubing. The entire assembly sits inside BSL-2 or BSL-3 containment with HEPA-filtered airflow and positive-pressure hoods, standards borrowed from virology labs, not data centres.
Modern designs include the following controls; a rough monitoring sketch follows the list:
- UV-C and chemical sterilisation cycles between runs
- Genomic integrity checks to detect mutation or microbial DNA
- Air-gap network isolation, so a software breach can’t trigger unsafe stimulation protocols
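To make the monitoring side of this concrete, here is a minimal sketch of what a per-node health check might look like if containment rules were expressed in code. Everything in it is hypothetical: the telemetry fields, thresholds and quarantine logic are illustrative assumptions of mine, not any vendor’s real API or published biosafety values.

```python
"""Hypothetical health-check loop for a single bio-node.

None of these interfaces are real vendor APIs; the sensor names,
thresholds and quarantine logic are illustrative assumptions only.
"""
from dataclasses import dataclass

@dataclass
class BioNodeTelemetry:
    temperature_c: float           # incubation temperature of the culture chamber
    ph: float                      # nutrient-medium pH
    dissolved_o2_pct: float        # oxygenation of the perfusion loop
    turbidity_ntu: float           # cloudiness: a crude early sign of contamination
    days_since_sterilisation: int

# Illustrative safe ranges, loosely based on typical mammalian cell-culture practice.
SAFE_TEMP = (36.0, 37.5)
SAFE_PH = (7.2, 7.5)
MIN_O2_PCT = 15.0
MAX_TURBIDITY = 1.0
STERILISATION_INTERVAL_DAYS = 7

def assess(node: BioNodeTelemetry) -> list[str]:
    """Return the reasons to quarantine the node (an empty list means healthy)."""
    issues = []
    if not (SAFE_TEMP[0] <= node.temperature_c <= SAFE_TEMP[1]):
        issues.append("temperature out of range")
    if not (SAFE_PH[0] <= node.ph <= SAFE_PH[1]):
        issues.append("medium pH out of range")
    if node.dissolved_o2_pct < MIN_O2_PCT:
        issues.append("low oxygenation")
    if node.turbidity_ntu > MAX_TURBIDITY:
        issues.append("possible microbial contamination (turbidity)")
    if node.days_since_sterilisation > STERILISATION_INTERVAL_DAYS:
        issues.append("sterilisation cycle overdue")
    return issues

if __name__ == "__main__":
    reading = BioNodeTelemetry(36.9, 7.6, 18.0, 0.4, 9)
    problems = assess(reading)
    if problems:
        # In a real platform this would pause stimulation and alert a biosafety
        # officer, never push a remote "fix" over the network: the control plane
        # stays air-gapped.
        print("QUARANTINE:", "; ".join(problems))
    else:
        print("Node healthy")
```

The deliberate design choice is the failure mode: an unhealthy node is quarantined and escalated to a human, never remotely “repaired”, which is where the air-gap principle above comes back in.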
Yet, as these platforms move from research to commercial scale, maintaining that level of containment across hundreds or thousands of “bio-nodes” will be hard, and expensive.
In silicon, a virus corrupts code; in biocomputers, it corrupts cells themselves.
Power with responsibility
In March 2025, Cortical Labs (https://corticallabs.com/cl1.html) launched the CL1 commercially: a biological computer built on lab-grown human brain cells. The team stressed energy efficiency, but avoided questions about “switching off.”
That silence worries me. Not because I expect these cultures to suddenly become self-aware, but because public trust erodes fast when ethical conversations happen after deployment.
If society learned anything from the last decade of AI, it’s that explainability and oversight must start early. For biocomputing, that means new governance models, ones that blend data protection law with biomedical ethics.
A framework for living machines
The more I research this space, the more convinced I become that we need dedicated governance frameworks… now, not later.
My research into existing legislation (the Human Tissue Act 2004, UK GDPR, biosafety codes) reveals significant gaps when applied to biocomputing. We’re going to need:
- Ethics boards specifically trained to assess biocomputing experiments
- Sentience-risk thresholds that define safe limits on neural size, connectivity and stimulation (a rough sketch of how such limits might be encoded follows this list)
- Decommissioning procedures that respect the ambiguous moral status of trained neural cultures
- Transparency requirements for public reporting and accountability
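One way to think about those thresholds is as a machine-checkable policy an experiment must satisfy before it is allowed to run, much like a change-control gate in IT. The sketch below is purely illustrative: the tiers, parameters and numbers are placeholders of my own, not figures proposed by any regulator, ethics body or vendor.

```python
"""Sketch of machine-checkable limits for a biocomputing experiment.

The parameters and numbers are placeholders to show the idea of
'governance as configuration'; no regulator has defined real values yet.
"""
from dataclasses import dataclass

@dataclass(frozen=True)
class SentienceRiskPolicy:
    max_neurons: int                   # upper bound on culture size
    max_stimulation_hz: float          # cap on stimulation frequency
    max_continuous_training_days: int  # how long a culture may be trained without review
    requires_ethics_review: bool

# Hypothetical tiers; real thresholds would need neuroscientific and legal input.
TIER_1 = SentienceRiskPolicy(max_neurons=50_000, max_stimulation_hz=10.0,
                             max_continuous_training_days=30, requires_ethics_review=False)
TIER_2 = SentienceRiskPolicy(max_neurons=1_000_000, max_stimulation_hz=50.0,
                             max_continuous_training_days=90, requires_ethics_review=True)

def within_policy(neurons: int, stim_hz: float, training_days: int,
                  policy: SentienceRiskPolicy) -> bool:
    """True if the proposed experiment stays inside the policy's limits."""
    return (neurons <= policy.max_neurons
            and stim_hz <= policy.max_stimulation_hz
            and training_days <= policy.max_continuous_training_days)

# Example: a proposed experiment checked against Tier 1 before it is allowed to run.
print(within_policy(neurons=80_000, stim_hz=5.0, training_days=14, policy=TIER_1))  # False
```

Encoding limits this way doesn’t resolve the ethics; it just makes the limits auditable and forces someone to write them down.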
This mirrors what animal research went through decades ago: from no rules to the 3Rs (Replace, Reduce, Refine).
I don’t believe every lab needs bureaucracy for the sake of it. But if you’re training living neurons for computation, you need the same rigour we demand when testing on mice, or running critical AI in healthcare.
Bridging disciplines
One of the biggest challenges is simply language. Engineers talk about “hardware” and “uptime.” Biologists talk about “tissue health” and “viability.” Ethicists talk about “moral status.”
To manage living compute responsibly, those worlds must meet in the middle. A data centre of the future might need a biosafety officer, a data-protection lead, and a neuroscientist all sharing the same change-control board.
It sounds far-fetched now. So did cloud security 20 years ago.
Green doesn’t always mean good
The environmental narrative is seductive: million-fold energy efficiency. But ethical sustainability isn’t just carbon accounting. It’s moral accounting.
If biocomputing ever scales, decommissioning alone could generate dilemmas: disposing of tissue that has been “alive” for months and may even have learned from its environment.
That’s why I think we need to build humane disposal standards before the first commercial biocompute rack ever goes online.
Where this could lead
Short term: organoid co-processors running adaptive control systems or simulation tasks where plasticity beats precision.
Long term: hybrid machines, part silicon, part cellular, sharing workloads. It’s possible that biocomputers could train AIs rather than replace them, acting as organic sandboxes that explore patterns conventional chips can’t.
But we must remain humble: a neuron isn’t a transistor. It drifts, it grows, it dies. What makes biological compute so powerful also makes it unpredictable.
So where do we draw the line?
Maybe the right question isn’t “Can we?” but “Should we, and how?”
- Should a donor be able to withdraw or revoke consent once their neurons are computing?
- Should a neural cluster that’s been trained for months have some moral status?
- And who is responsible if a biological processor behaves unexpectedly?
None of these questions have answers yet. But they will, and soon.
A closing thought
When I started in cybersecurity, the threats I dealt with were familiar ones: malware, ransomware, human error. Today, the boundaries are blurring. In biocomputing, a breach could mean cross-contamination, data loss, or the unintentional suffering of a living system.
We’ve reached a point where ethics, biology, and computer science are no longer separate disciplines, they’re dependencies.
If we want this technology to earn public trust, the time to design its safeguards is now, not later.
Because when the day comes that a server truly lives, we need to be ready with answers, not just innovations.
What are your thoughts? I’m particularly interested in hearing from:
- Data centre operators: How would your infrastructure and protocols need to change for biological processors?
- Compliance and ethics professionals: Where should the regulatory lines be drawn?
- Biotech researchers: What critical perspectives am I missing in this analysis?
Drop your thoughts in the comments; this technology is moving fast, and we need diverse voices in the conversation now.
Further reading
- FinalSpark Neuroplatform – https://finalspark.com/neuroplatform
- Hartung, T., Morales Pantoja, I. E., & Smirnova, L. (2024). Brain organoids and organoid intelligence from ethical, legal, and social points of view. Frontiers in Artificial Intelligence, 6, 1307613 – https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1307613/full
- Kagan, B. J., et al. (2022). In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron, 110(23), 3952-3969 – https://pubmed.ncbi.nlm.nih.gov/36228614/
- Cortical Labs CL1 – https://corticallabs.com/cl1.html
- Johns Hopkins Organoid Intelligence – https://www.frontiersin.org/journals/artificial-intelligence/sections/organoid-intelligence

