I remember when synthetic biology was supposed to be the existential threat. The conferences, the white papers, the Senate hearings filled with grim predictions of a world where DNA synthesis and gene editing would put pandemic-class pathogens in the hands of anyone with a credit card and an internet connection.
It hasn’t quite turned out that way, yet. What actually happened was a decade of incredible breakthroughs in medicine, materials, and sustainability. But the conversation didn’t pause long enough to acknowledge that. By the time synthetic biology had delivered vaccines at record speed, engineered microbes to clean up pollution, and laid the groundwork for an entirely new bioeconomy, the existential risk machine had already moved on.
(The podcast audio was AI-generated using Google’s NotebookLM. It tracks the report’s content directly; my full perspective is in the text itself.)
Next up: AI. Suddenly, it wasn’t just DNA printers and CRISPR we needed to worry about; it was generative models designing bioweapons, AI-powered lab automation accelerating bad actors, and synthetic organisms emerging from computational black boxes. The cycle repeated. White papers, hearings, doomsday scenarios. And yet, here we are. AI has indeed transformed biotech, but mostly in ways that have accelerated drug discovery, pandemic preparedness, and biological manufacturing, all while the direst predictions remain unproven so far.
And now? Now, we’ve arrived at Mirror Life.
The latest Science paper, Confronting Risks of Mirror Life, and its accompanying technical report, outline an elaborate case for why mirror bacteria, a hypothetical class of organisms built from the reverse-chirality versions of biomolecules found in all known life, could become a biosecurity risk. The argument is compelling in its detail: mirror bacteria would be immune to existing phages and antibiotics, could evade human and animal immune systems, and, if released into the environment, might become an unstoppable invasive species with no natural predators.
It is, in other words, the same existential warning, just updated for 2025. And I find myself hesitating. Not because I don’t respect the science. Not because I don’t believe in risk assessment. But because I am growing weary of this cycle, this treadmill of technological fear that churns through one apocalyptic prediction after another, rarely pausing to validate what actually happens when these technologies develop in the real world.
I wanted to dive deeper into this topic for you. I was recently introduced to the concept of the Shock Doctrine by Naomi Klein, and I found it a compelling frame for how I see much of this doom-cycle in the life sciences.
I wrote a long-form, subscriber-exclusive piece titled “Shock Doctrine in the Life Sciences - Why We Need to Right-Size Fear in Biotech”.
It’s >10,000 words, so I’m sharing it here rather than sending yet another email directly to your inbox, but I’d welcome your thoughts and feedback. If you’re tired of sound bites:
The Recurring Problem of Premature Panic
One of the great ironies of our time is that we are living through an era of unprecedented technological acceleration, and yet our ability to have rational, measured conversations about risk remains as flawed as ever. We don’t track risks over time. We don’t measure how concerns evolve. Instead, we pour all our energy into speculative debates at the front end, before we have enough empirical evidence to assess whether these risks are realistic, and then we quietly forget about them.
Look at the last two cycles:
Synthetic Biology Panic (2010s–Early 2020s)
DNA synthesis and CRISPR were supposed to democratize biological threats.
Instead, so far, they have transformed medicine and agriculture, while actual cases of synthetic bioterrorism have remained nonexistent.
AI & Biosecurity Panic (2020s–Present)
AI was supposed to put pathogen design into the hands of anyone with a laptop.
Instead, so far, AI has accelerated biological discovery and biosurveillance far more than it enabled harm.
Now, the cycle repeats. Mirror Life as the new existential risk.
And yet, we have no systematic way of assessing whether these fears materialize over time. The conversations explode, then dissipate. The world moves on. We never take stock of whether the initial concerns were valid, overblown, or simply misdirected.
So the question I have now, as I watch this newest phase of concern unfold, is this:
Are we actually learning anything from these cycles, or are we just chasing the next big fear?
This is far more a question about our process for addressing these concerns than about any one topic of concern in particular.
The Case for Continuous Monitoring, Not Cyclical Panic
I’m not saying mirror life isn’t worth studying. It likely is. The potential for mirror-image biology to evade immune responses is an interesting immunological challenge. The idea that mirror bacteria could be resistant to natural phages and microbial competition makes for a fascinating ecological thought experiment. But I am not yet convinced that this is the next existential threat, nor do I think we should treat it as such until we have more empirical data.
If the past decade has taught us anything, it’s that the world needs a better way to engage with emerging technologies than jumping from one existential debate to the next. What would that look like?
First
We need long-term, empirical tracking of technology risks, not just speculative models. We should be measuring how synthetic biology, AI, and now mirror life actually evolve, rather than front-loading the entire policy discussion with untested hypotheticals.
Second
We need comparative risk assessment. Mirror life isn’t the only biological unknown out there. If we’re worried about immune evasion, we should compare it to prions, antibiotic-resistant bacteria, and other naturally occurring pathogens. If we’re worried about biocontainment, we should examine what’s already being done for other synthetic organisms rather than acting as if this is a wholly new category of risk.
Third
Most critically, we need exit ramps for outdated concerns. If mirror life never materializes as a serious risk, how do we wind down the policy conversation? How do we redirect resources toward threats that have proven to be real rather than hypothetical?
Final Thoughts
I have no doubt that mirror life will continue to be a topic of interest in synthetic biology and biosecurity circles. But I am not ready to buy into the panic just yet.
We can do better than this cycle. We have to. Maybe this time, we reflect on our process as much as we reflect on mirror life.
Cheers,
-Titus
Reminder: I wanted to dive deeper into this topic for you. I was recently introduced to the concept of the Shock Doctrine by Naomi Klein, and I found it a compelling frame for how I see much of this doom-cycle in the life sciences.
I wrote a long-form, subscriber-exclusive piece titled “Shock Doctrine in the Life Sciences - Why We Need to Right-Size Fear in Biotech”.
It’s >10,000 words, so I’m sharing it here rather than sending yet another email directly to your inbox, but I’d welcome your thoughts and feedback. If you’re tired of sound bites, Read Now.