Is AI the New Anti-Christ? Why Some People Still Fear It
- Chris Howell
- Jun 28
- 4 min read
Updated: Jul 12

Over the past month or two, I’ve met some truly fascinating people and had lots of insightful conversations about AI—especially through Mercia Minds, client calls and poster drops. One theme that crops up, quietly but consistently, is fear. Sometimes even distrust.
For some, it’s a moral unease—AI feels unnatural, even unholy. For others, it’s seen as cheating, especially in creative or academic work. There’s a lingering sense that we’re tampering with something we shouldn’t.
But where does this fear really come from?
AI as a Mirror (And That’s the Scary Part)
AI is the most disruptive technology since the automobile. It’s trained on us—our writing, our voices, our art, our questions, our values. In many ways, it’s a mirror. But not an ordinary one. A supercharged version of us that can solve in seconds what might take us hours.
Maybe that’s part of the discomfort. What does it mean when a tool can do what you do—faster, sometimes better? It’s a challenge to our identity as creators, workers, even thinkers.
Fear of the Machine
Hollywood hasn’t helped. The Terminator films, Ex Machina, I, Robot, The Matrix and, yes, you too, Miss M3GAN: they’ve all trained us to associate AI with doom.
Every time a news story warns about "AI risk," there’s usually a picture of a glowing-eyed robot attached. Rarely an image of a spreadsheet, or a GP surgery using AI to help triage patients more quickly. Stories about AI doom get more readers and clicks than stories about AI benefits.
The narrative is seductive: the machine takes over, sees humans as inefficient, and wipes us out. Game Over. But fiction isn't fact.
Meanwhile, the US is Loosening the Rules
To make matters worse, some governments seem to be moving away from caution rather than towards it. In June 2025, the United States announced plans to roll back several key AI safety regulations, prioritising rapid innovation and economic competitiveness over precaution.
For people already feeling uneasy about AI, this doesn’t help. If even world powers are scaling back oversight, it feeds the fear that AI is being unleashed without enough checks—and that the worst-case sci-fi scenarios might not be so far-fetched after all.
It’s not about stopping progress. It’s about making sure progress doesn’t outrun our ability to stay in control.
Déjà Vu: Every Tech Was Once Feared
This isn’t the first time a new technology has sparked panic:
Trains: When railways first appeared, people feared that travelling at 30 miles per hour would be fatal, causing bodies to melt or minds to collapse (!). Those fears proved unfounded (nobody melted on my recent trip to Peterborough, at least), and trains became a backbone of modern transport.
Telephones: Some believed telephones would transmit electric shocks or even evil spirits... and some would say, 'I've had phone calls like that!' There were also warnings that they would destroy social norms. Instead, they revolutionised communication.
Television: Warnings ranged from TV ruining eyesight to rotting brains (the jury is still out on that), and some even predicted the medium would quickly fade away. Instead, television became a dominant cultural force.
CRT Monitors and Wi-Fi: Fears about radiation from CRT screens and Wi-Fi causing miscarriages or cancer were widespread, but scientific evidence did not support these claims, and both technologies became ubiquitous.
The Y2K Bug: The approach of the year 2000 sparked apocalyptic predictions that computer failures would crash planes, elevators, and the global economy. In the end, either thanks to extensive preparation or because the risk was overblown, nothing catastrophic occurred.
The Internet: Early critics dismissed the internet as a passing fad, predicting it would "catastrophically collapse." Instead, it transformed nearly every aspect of modern life.
Robots and Automation: For over a century, people have feared that automation and robots would eliminate all jobs and destabilize society. While automation has changed the workforce, it has not led to the predicted societal collapse.

Technology always comes with trade-offs. But we adapt.
Real-World Risks That Fuel the Fear
Some of the fear isn’t just about science fiction. There are real, tangible concerns:
Bias and Fairness: AI systems can amplify existing biases, leading to unfair decisions in areas like hiring, policing, or lending.
Privacy Violations: AI can process and infer sensitive personal data at scale, raising major concerns about surveillance and data misuse.
Misuse and Weaponisation: AI can be used in cyberattacks, disinformation campaigns, or even autonomous weapons, creating entirely new security threats.
Superintelligence and Existential Threats: Some fear that if AI surpasses human intelligence, it could pursue goals misaligned with human values—potentially with catastrophic results. This concern is echoed by experts and futurists alike.
Delegation and Moral Responsibility: As people increasingly rely on AI for decisions, there's a growing risk of moral detachment. If something goes wrong, it becomes easy to blame "the system" rather than hold humans accountable.
So, How Can We Ease That Fear?
It starts with honest conversations. We need to make space for people to voice their concerns without judgment—and respond with clarity, not hype. Education is part of the answer, but so is transparency: being open about how AI works, what its limitations are, and who is responsible when things go wrong.
Next, we need stronger ethical standards and regulation—not weaker ones. Tools this powerful must come with guidance, accountability, and safety nets. The more people see AI being used to improve lives with care, the more trust will grow. I can't remember the last time I saw the Government promoting how AI can prevent fraud.
And most importantly: AI should remain human-led. It should assist decision-making, not replace it. That’s how we keep values like fairness, empathy, and common sense in the loop.
We’re still here. We haven’t destroyed ourselves yet.
And no, AI isn’t the Anti-Christ. It’s not good or evil. It’s a tool. A powerful one, yes—but one that reflects the people building it, training it, and using it.
That’s why I started Mercia AI: to help people use AI wisely, not fear it. Whether you’re curious, skeptical, or somewhere in between, you’re not alone.
Want to Talk About It?
I run face-to-face events in Coventry through Mercia Minds, where you can ask anything—no jargon, no pressure. Or if you prefer a 1-to-1 chat, you can always book a non-satanic Discovery Call.