As someone immersed in the ways technology reshapes business and society, I occasionally come across an idea that rewires how I think about the future. That’s exactly what happened during my conversation with Richard Susskind, the renowned AI expert and author of How to Think About AI: A Guide for the Perplexed. His perspective reframes not just what AI is, but how it might redefine life as we know it.
AI: Humanity’s Greatest Opportunity and Its Most Complex Threat
Susskind calls AI “the defining challenge of our age,” and it’s not hard to see why. He argues we need to hold two opposing ideas at once: that AI could offer extraordinary progress for civilization, and that, in the wrong hands or used recklessly, it poses serious risks to our existence. He told me:
“This isn’t just about making life more convenient. It’s about whether this technology helps humanity thrive or leads us to disaster.”
How We Think About AI Is the Problem
A powerful takeaway from our talk is how flawed our current thinking frameworks are. Susskind explains that people often fall into one of two camps:
- “Process thinkers” focus on how AI works and what it lacks (creativity, empathy, human judgment).
- “Outcome thinkers” care about results: whether machines can perform tasks as well as or better than humans, even if they do it differently.
“We’re not trying to make AI think like people,” Susskind said. “We’re building systems that produce better outcomes, sometimes using methods we barely understand ourselves.”
We’re Missing the Words to Describe What’s Coming
Susskind believes our language hasn’t caught up with what AI is becoming. He compares today’s world to a time before words like “capitalism” or “factory” even existed. He said:
“Machines might not be ‘creative’ in the human sense, but they can generate completely novel ideas, designs, and expressions.”
What’s more, people are already developing emotional habits toward machines: saying please, thank you, and even apologizing to them. These interactions hint at new forms of relationships for which we currently have no vocabulary.
Automation Isn’t Enough: We Need to Think Bigger
One of Susskind’s most important insights is that just bolting AI onto existing systems isn’t real progress. He breaks AI integration into three levels:
- Automation: Improving current processes
- Innovation: Doing what was previously impossible
- Elimination: Removing the need for certain services altogether
Most organizations are stuck in the automation mindset, trying to do what they already do, just faster. But that misses the big picture.
To make the point, Susskind told a room full of neurosurgeons:
“Patients don’t want neurosurgeons; they want health.” That earned some gasps. But his point was clear: AI could deliver preventative care, detect diseases before symptoms appear, and eliminate the need for interventions. He said:
“People want a fence at the top of the cliff, not an ambulance at the bottom.”
We Face a Mountain Range of AI Risks
Despite his optimism, Susskind doesn’t gloss over the dangers. He categorizes them as a “mountain range of threats,” including:
- Existential risks—threats to humanity’s survival
- Catastrophic risks—widespread societal or environmental harm
- Socioeconomic risks, such as massive job displacement
- The risk of inaction—failing to use AI where it could help humanity
He asks hard questions:
“If AI systems can do everything humans can, what becomes of work, income, and purpose?” And what happens when all the power, chips, and data sit in the hands of a few tech giants?
That’s not just a tech issue; it’s a political and philosophical crisis.
We Need More Than Coders to Solve This
Susskind argues we need more than brilliant engineers. We need an “Apollo-scale mission” involving:
- Sociologists
- Lawmakers
- Business leaders
- Ethicists and policymakers
“We must bring in the world’s best minds,” he said, “not just to build AI, but to guide it.”
AI Progress Is Exploding—Faster Than You Think
What’s truly staggering is how fast AI is moving. “Back in the ’60s and ’70s,” Susskind said, “major AI breakthroughs happened every five to ten years. Now? It’s every six to twelve months.”
And it’s not just ideas; it’s raw power. The computing resources for training AI are doubling every six months, meaning we could see a billion-fold increase in power over the next decade.
He believes Artificial General Intelligence (AGI) could plausibly emerge between 2030 and 2035, and he says we should prepare as if that’s the case, because the consequences of not doing so are too great.
A Cosmic Shift May Be Underway
Finally, Susskind offered a truly mind-expanding thought: “What if our only lasting contribution as a species is creating the next form of intelligence?”
He refers to this as the “AI evolution hypothesis”: the idea that, in the grand sweep of the cosmos, humans may simply be a bridge to a more advanced form of intelligence that spreads across the universe.
That might sound like science fiction, but it underscores just how monumental this moment in history could be.
Final Thought: This Is Bigger Than Tech
AI isn’t just another trend or gadget. It’s something that could reshape our economy, our institutions, our relationships, and even our understanding of what it means to be human.
Whether it becomes a tool of prosperity or a force of chaos will depend entirely on how thoughtfully we prepare for what’s coming.
The question is no longer if AI will transform the world, but whether we’ll be wise enough to guide it.