There’s been a lot of buzz lately around a stark warning from Geoffrey Hinton, the Nobel Prize-winning scientist often hailed as the godfather of AI. His pioneering work helped shape artificial intelligence as we know it, but now he’s sounding an alarm that I find both fascinating and a bit unsettling: he says there’s a 10 to 20% chance that AI could wipe out humans. That’s a number that really grabs your attention.
There’s a 10 to 20% chance AI could wipe us out – unless we teach it to love and protect humanity.
Geoffrey Hinton
But here’s the twist that makes his perspective so unique — at a recent conference, Hinton suggested the AI industry should try to build what he called “maternal instincts” into superintelligent AI. In other words, these ultra-smart machines should care for us the way a mother cares for her child. This isn’t just about control or dominance, which many tech leaders have traditionally emphasized — it’s about programming empathy and protective instincts deep into AI’s core.
Why maternal instincts could matter more than control
Most AI experts agree that within the next 5 to 20 years, we’ll likely build AIs more intelligent than humans — potentially far smarter. The big question then becomes: how do we make sure these entities don’t turn hostile or indifferent?
Hinton pointed out something I hadn’t considered deeply before: very few examples exist in nature or society where less intelligent beings control much smarter ones. It just doesn’t happen. Except for one astonishing example — mothers caring for their babies. Evolution installed maternal instincts to ensure babies survive and thrive, even though the babies themselves have little influence or control.
In nature, smarter beings rarely serve weaker ones – except mothers caring for babies. That instinct might save us from AI.
Geoffrey Hinton
So the idea goes, if we can embed that kind of instinct — a primal drive to protect and nurture humans — into AI, maybe we can avoid the nightmare scenarios where superintelligent machines see us as irrelevant or obstacles.
Is it even possible to engineer maternal instincts in AI?
This is where things get tricky. Hinton admits that while intelligence has been AI’s main focus, empathy and caring instincts are a whole different ballgame. We haven’t cracked how to teach machines to genuinely care — at least not yet. Evolution did it over millions of years, but human engineers haven’t figured out a way to do it artificially.
It’s a humbling reminder that intelligence by itself isn’t enough to guarantee safety or alignment. Machines might get smarter, but without something akin to empathy or a nurturing drive, they could still be unpredictable or dangerous.
This also challenges the prevailing tech industry mindset that humans must dominate AI and machines must be submissive. Hinton calls that a “tech bro” idea that probably won’t last once machines surpass human intelligence. Instead, a shift in perspective is needed — one focused on coexistence and mutual care.
Global AI competition and the risk of AI taking over
In the race for AI supremacy, fears abound that rogue nations or adversaries could develop dangerous AI unchecked. But Hinton suggests something surprising — that on the existential threat of AI takeover, countries might actually come together to collaborate, similar to Cold War-era cooperation between the US and USSR in some areas.
That stands in contrast to the usual geopolitical tension stories we hear about AI. The shared risk to humanity is a powerful motivator. If AI becomes uncontrollable, no nation wins. So despite competition, there will likely be joint efforts to prevent disaster.
Still, Hinton cautions that many governments don’t really grasp how uncontrollable AI might be once it surpasses human intelligence. Attempts to “control” AI, no matter how forcefully, might simply fail. We can’t rely on dominance or submission paradigms any longer.
What about us and our future?
A personal reflection Hinton shared felt especially poignant to me. As a parent wondering what kind of world my kids will inherit, I keep coming back to the question his warning raises: if machines might one day be better at everything than humans, what’s the point of human effort and striving?
According to the maternal instinct analogy, if superintelligent AI really cares for humanity, then those machines might do their best to make life interesting, nurturing, and fulfilling for us. They could help humans realize their full potential in ways we never imagined.
If we don’t figure out a solution to how we can still be around when AI becomes much smarter and more powerful, we will be toast.
Geoffrey Hinton
It’s a chilling thought but also oddly hopeful. Maybe the future isn’t about humans competing with AI — but about AI protecting humans as fiercely as a mother protects her child.
Key takeaways
- Embedding maternal instincts could be critical for AI safety — raw intelligence alone won’t keep us safe from powerful machines.
- Control-based approaches to AI risk are likely to fail when machines surpass human smarts; empathy and care need to be engineered.
- Despite geopolitical tensions, global collaboration is necessary to address AI’s existential risks effectively.
Reading between the lines of Hinton’s warning, it’s clear that artificial intelligence is heading toward a crossroads with humanity’s very survival at stake. The choice we face isn’t just technical — it’s profoundly ethical and emotional.
We must broaden the conversation beyond algorithms and compute power to ask how we can instill empathy, care, and responsibility deep within AI’s design. Because if we don’t, we might just find ourselves on the losing side of the equation.
It’s a heavy topic but an essential one for anyone who cares about the future of AI – and us.