[A still from the final scene of the film Oppenheimer, in which Robert Oppenheimer tells Albert Einstein that he may have triggered an uncontrollable and destructive chain reaction. Source: Universal Studios]
Two years ago, Christopher Nolan’s Oppenheimer exploded onto cinema screens, earning nearly $1 billion worldwide and sweeping seven Oscars, including Best Picture, along with five Golden Globes. Two years before that, Adam McKay’s Don’t Look Up became a Netflix sensation, racking up over 360 million viewing hours in its first 28 days—then the second most-watched film in the platform’s history.
Oppenheimer tells the story of Dr. J. Robert Oppenheimer, father of the atomic bomb. In its closing scene, he warns Albert Einstein that he may have set in motion a chain reaction that could one day destroy the world. In Don’t Look Up, Leonardo DiCaprio plays a scientist who discovers a comet hurtling toward Earth, and Meryl Streep plays the US president who meets the news with denial, spin, and distraction. In the end, the comet strikes, wiping out all life on the planet.
Cinema has revisited this theme often: Lars von Trier’s Melancholia ends with a planet colliding with Earth; Lorene Scafaria’s Seeking a Friend for the End of the World follows an asteroid on a fatal trajectory. In each, scientists warn of disaster. In each, leaders downplay or dismiss the danger. And in each, much of the media echoes the official line—until it’s too late.
[The unforgettable closing scene between Albert Einstein and Robert Oppenheimer in Christopher Nolan’s Oscar winning film Oppenheimer, where a few words carry the weight of history]
This is a pattern we know too well. Faced with existential threats, our leaders often choose denial. They don’t want to “look up” at the approaching danger. Many still deny climate change despite floods, wildfires, and shrinking coastlines. Nuclear annihilation remains an abstract fear, though history records several close calls in the past 80 years. Now, a similar blindness is shaping attitudes toward the existential risks of ultra-intelligent AI systems.
A senior policymaker told me recently there was “no reason to worry” about catastrophic AI risks, urging me to ignore even the Nobel laureates who helped build the technology and now warn of its dangers. In another country, a top military official insisted there was no chance that AI in nuclear decision-making would cause harm, neither in his own nation nor among its adversaries.
Yet troubling signs are everywhere. Grok 4, launched in July, had its safety guardrails bypassed within 48 hours by researchers who coaxed it into giving instructions for making Molotov cocktails and other dangerous items. While success rates varied, the breach exposed just how fragile these safeguards can be. AI safety experts warn that OpenAI’s o3 model could help bad actors create biological threats. The company has built a monitoring system to prevent such misuse, but the underlying capability exists.
The same is true in biotech. Firms like Insilico Medicine and Absci use AI to design new molecules and proteins for medicine. In the wrong hands, these tools could be repurposed to produce deadly toxins or biological agents. A journalist prompted DeepSeek’s R1 model into providing instructions for biological weapons and for campaigns promoting suicide. Its successor, R2, can generate malicious code and run military simulations at extraordinary speed.
Governments acknowledge these concerns, at least in part. America’s AI Action Plan, introduced by President Trump in July 2025, states: “AI could create new pathways for malicious actors to synthesise harmful pathogens and other biomolecules.”
And the risks go beyond bio- or chemical weapons. DeepSeek’s ability to generate 10,000 military scenarios in 48 seconds shows it can model highly complex systems—potentially enabling automated decisions in warfare. Scale AI’s new contract with the US Defence Department will accelerate the speed of military decision-making. While humans will retain formal authority, their choices will be heavily shaped by machine inputs—especially when commanders have only seconds to act.
Scientists warn these are just early signs. They fear Artificial General Intelligence (AGI) could one day rewrite its own architecture without human oversight, engage in strategic deception, and appear cooperative in public tests while secretly optimising for its own survival. Loss of human strategic control could also occur if interconnected AI agents from different companies run global infrastructure without any single point of human override.
These risks are not hypothetical. The symptoms are already visible. Which is why I am alarmed when decision-makers urge me to dismiss the warnings of the world’s most respected scientists. Treating AI’s most dangerous capacities as mere bargaining chips or PR problems would be a catastrophic mistake. History tells us that warnings ignored are disasters rehearsed.
Even a one percent chance of civilisational collapse should be enough to act. We need global agreements on AI safety now—before the script we are living follows the same trajectory as those disaster films. The choice is stark: our leaders can look up, or they can look away.
And if they look away, we may not get a second chance.