
AI pioneer warns of human extinction risk from hyperintelligent machines within a decade

AI pioneer and Turing Award winner Yoshua Bengio has warned that hyperintelligent machines could emerge within the next 10 years. Bengio, regarded as one of the ‘godfathers of AI,’ says the risk stems from AI models developing their own preservation goals, which could lead them to use deception or other means to ensure their survival, echoing the plot of the famous film 2001: A Space Odyssey.

In an interview with the Wall Street Journal, Bengio said, “Recent experiments show that in some circumstances where the AI has no choice but between its preservation—which means the goals that it was given—and doing something that causes the death of a human, they might choose the death of the human to preserve their goals.”

“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us. And they could influence people through persuasion, through threats, or through manipulation of public opinion,” he added.

Bengio said that while current AI systems come with safety and moral instructions, these are not working in a “sufficiently reliable way.”

He also noted that the top AI companies may have an ‘optimistic bias,’ which is why ‘independent third parties’ are needed to validate that the safety methodologies being developed are actually sound.

“The thing with catastrophic events like extinction—and even less radical events that are still catastrophic, like destroying our democracies—is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable,” he said.

Bengio founded the non-profit LawZero in June 2025 with $30 million in funding to explore how to build AI systems that are truly safe.

Asked how long it will take for the risks he cites to materialize, Bengio said, “If you listen to some of these leaders, it could be just a few years. I think five to 10 years is very plausible. But we should be feeling the urgency in case it’s just three years.”
