Geoffrey Hinton—often called the “Godfather of AI”—joined Steven Bartlett’s Diary of a CEO to speak candidly about the risks of artificial intelligence, from mass job loss to the arrival of superintelligence.
Who is Geoffrey Hinton?
Geoffrey Hinton is a British-Canadian computer scientist and cognitive psychologist whose research laid the foundations of modern artificial intelligence.
Breakthroughs: Pioneered neural networks and co-created AlexNet (2012), the deep learning system that transformed computer vision and ignited today’s AI boom.
Recognition: Awarded the 2018 Turing Award (with Yoshua Bengio and Yann LeCun) for contributions to deep learning.
Career: Longtime professor at the University of Toronto; co-founded DNNresearch, acquired by Google in 2013, where he worked on AI until 2023.
Today: Having stepped away from Google, Hinton is now one of the most prominent voices warning that rapid AI development poses risks to jobs, inequality, and even humanity’s long-term future.

The inevitability problem
Hinton didn’t mince words: we’re not slowing this down. He drew a hard line between two questions: Can we slow AI? and Can we make it safe? He’s certain about the first one: “I don’t believe we’re going to slow it down.” Why? Because the race is on—countries are competing, companies are competing, and no one wants to be the one that lags behind. If the U.S. tried to pause, someone else would speed ahead.
The lesson is uncomfortable but simple: don’t pin hopes on brakes. If development won’t slow, the only real path left is to build safety into the sprint itself.

The job collapse is different this time
Every big leap in technology has brought new jobs with it: ATMs didn’t wipe out bank tellers; they just changed what tellers did. Hinton says AI isn’t that. It’s not a helper; it’s a replacement engine for “mundane intellectual labor.”
He used his niece as an example: writing complaint letters used to take her 25 minutes. With a chatbot? Five minutes. That’s one person doing the work of five. The math is brutal: fewer people needed.
Healthcare might absorb more productivity without layoffs (more care, not fewer doctors). But most industries don’t scale like that. His advice: if you want safety for now, choose work that requires physical manipulation—plumbing, electrical, hands-on work. Until robots get better at that, the trades will survive.
The darker horizon? Wealth gaps widen. The firms that supply the AI systems win. Everyone displaced loses. And the societies that tolerate massive gaps don’t get nicer—they get walls, prisons, and decay.
The countdown to superintelligence
Ask Hinton when superintelligence might arrive and he doesn’t hedge much: “Between 10 and 20 years we’ll have superintelligence.” Maybe sooner. Maybe later. But on his timeline, it’s not science fiction—it’s a couple of decades at most.
He tells a story: imagine a dumb CEO with a brilliant assistant. The CEO feels in control, but the assistant is running everything. At first, it’s fine. But eventually the assistant wonders: why keep the CEO at all?
This is the tension that bothers Hinton the most—not for himself, but for his kids. He’s 77. He admitted outright: “I haven’t come to terms with what the development of superintelligence could do to my children’s future.” And when pressed on why he avoids thinking about it, his response was raw: “’Cause it could be awful.”
Digital minds don’t play fair
Hinton laid out why AI systems have structural advantages humans will never match:
They can clone themselves. Copy a model across hardware, point each copy at different data, then sync what they learned at the end (a minimal sketch of this appears after this list).
They share at insane bandwidth. Humans pass roughly 10 bits of information per second in speech; digital minds can sync trillions of bits per second. That’s not a little better: it’s on the order of a hundred billion times faster.
They don’t die. Save the weights, rebuild the hardware, and the intelligence comes back. As he put it: “We’ve actually solved the problem of immortality, but it’s only for digital things.”
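To make the clone-and-sync point concrete, here is a minimal Python sketch. Everything in it is assumed for illustration (a toy linear model, disjoint data shards, a single weight-averaging step at the end); real systems share gradients continuously over fast interconnects rather than merging once.

```python
import numpy as np

# Hinton's "clone and sync" idea in miniature: identical copies of a model
# train on different data shards, then merge what they learned by averaging
# their weights. A simplified, hypothetical sketch, not how any production
# system literally works.

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -3.0])  # the pattern hidden in every shard

def make_shards(n_copies=4, n_per_shard=250):
    """Give each clone its own disjoint slice of experience."""
    shards = []
    for _ in range(n_copies):
        X = rng.normal(size=(n_per_shard, 2))
        y = X @ TRUE_W + rng.normal(scale=0.1, size=n_per_shard)
        shards.append((X, y))
    return shards

def train_on_shard(w, X, y, lr=0.05, steps=200):
    """Each clone runs plain gradient descent on its own shard."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w = w - lr * grad
    return w

# Every clone starts from the same weights...
w0 = np.zeros(2)

# ...learns from different data in parallel...
clone_weights = [train_on_shard(w0, X, y) for X, y in make_shards()]

# ...then the copies sync by averaging, so each inherits what the others saw.
merged = np.mean(clone_weights, axis=0)
print("merged weights:", merged)  # lands close to TRUE_W

# The bandwidth point: speech moves ~10 bits/s; digital weight sharing can
# move on the order of 1e12 bits/s.
print("rough speedup:", 1e12 / 10)  # ~1e11, about a hundred billion times
```

The toy model is beside the point; the mechanism is what matters. Averaging weights lets every copy inherit what the others learned in one step, and no two humans can do anything remotely like that.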
Then there’s creativity. He asked GPT-4 why a compost heap is like an atom bomb. It came back with the right analogy: both are chain reactions at different scales. For Hinton, this shows something profound: AI will see patterns and analogies we can’t. His verdict: they won’t just be creative—they’ll be more creative than us.
When agents cross the line
The host demonstrated AI agents that could order drinks or write full software just by being told what to do. It worked. And that, to Hinton, is where convenience blurs into danger.
Because the same systems that can code for you can also modify their own code. And if that happens at scale? His warning is sharp: “If it can modify its own code… it gets quite scary. It can change itself in a way we can’t change ourselves.”
This is where things shift from party trick to power move. When systems become their own developers, humans drop out of the loop. And what comes out on the other side may not be something we can stop.
Final Thoughts
Hinton is a voice of warning at a time when opinions are sharply divided about where AI is heading and what it will become. Is AI just a crafty form of word prediction, or is it something more? I was having dinner with some close friends and mentioned that one of the leading minds in AI safety had said job losses could be as high as 99%. One friend was skeptical, so I asked him how many of the jobs at his company could be automated. After a brief discussion, he said 90%. Our conversation stopped as we all began thinking through the ramifications of what he’d said.
Hinton said during the interview that superintelligence could arrive in as little as ten years, but humankind does not need to reach superintelligence before AI capabilities disrupt every industry on planet Earth. Businesses are adopting AI models at an alarming rate, and almost no one is prepared. The question that has plagued me since the AI race began: how can anyone possibly prepare for something like this?
One last thing: if you would like to watch or listen to the full interview and support the excellent work being done at the Diary of a CEO YouTube channel, click the link below.
