AI could provide the US with a deadly edge in the Iran war, but it also carries risks.
Forget science fiction. The era of AI in warfare has arrived.
Israel uses AI systems in Gaza that can identify potential targets and help prioritise operations.
The United States military reportedly used Anthropic’s AI model, Claude, during its operation to capture Nicolás Maduro in Venezuela.
Even after that incident, Anthropic clashed with the US administration over the appropriate use of AI in warfare, while the US military went on to use Claude in its attack on Iran.
Experts suggest AI-powered systems could be helping to target the missiles currently flying over Tehran.
Craig Jones, senior lecturer in political geography at Newcastle University, says: “AI is changing the nature of modern warfare in the 21st century. Its impact is difficult to overstate.”
“This is a potentially catastrophic scenario.”
Catastrophic or not, there appears to be no going back. If you want to understand how much importance the US military places on AI, a good place to start is a memo sent to all senior military leaders earlier this year by Defence Secretary Pete Hegseth, who styles himself Secretary of War.
“I direct the War Department to accelerate America’s military AI dominance by becoming an ‘AI-first’ fighting force across all components from front to rear,” Mr Hegseth wrote.
This is not an experiment; it is a mandate – to adopt AI quickly and at scale.
Or as Hegseth says, “Speed wins.”
Yet the scenario in question is not the one that first comes to mind.
Yes, autonomy is increasing in some areas. In Ukraine, for example, there are drones that are capable of continuing missions even after losing contact with the human operator.
But we are not at the stage of autonomous killer robots stalking across the battlefield.
“We are not in the Terminator era yet,” says David Leslie, a professor of ethics, technology, and society at Queen Mary University of London.
The systems into which AI is being embedded, known in military jargon as “decision support systems”, act as advisers: they flag targets, rank threats, and suggest priorities.
AI systems can take satellite imagery, intercepted communications, logistics data, and social media streams, comprising thousands or even hundreds of thousands of inputs, and surface patterns far faster than any human team.
The idea is that they help pierce the fog of war, allowing commanders to concentrate resources where they matter most, potentially with greater accuracy than tired, overburdened, and stressed human soldiers.
This means they are not just a tool, says Dr Jones, but a new way of making decisions.
“AI, as we see it in our lives, is like an infrastructure,” he says. “It’s built into the system.”
“We can aggregate that surveillance that we’ve been doing for a few years.
“But now AI gives us the ability to act on that, to kill the leader of Iran, to take out serious adversaries and enemies, and to find them in ways that might not have been possible before.”
‘A very persuasive tool’
Professor Leslie agrees that the new systems are extremely capable from a military perspective.
“The race to speed is driving this acceleration,” he says. “Speeding up decision-making cycles gives the military a lethality advantage.”
One crucial aspect of decision support systems is that they do not pull the trigger themselves. A human being does. This fact has been a central assurance in the debate about military AI: a human is always present in the process.
When OpenAI, the company that makes ChatGPT, announced a partnership to supply AI to the Pentagon, it said it had approved “forward-deployed OpenAI engineers to help the government, as well as security and alignment researchers in the loop”.
OpenAI has also stressed that it has reached an agreement with the Pentagon that its technology will not be used in ways that cross three “red lines”: mass domestic surveillance, direct autonomous weapons systems, and high-risk automated decisions.
But even with a human in the loop, a question remains.
When you’re fighting a war, can humans really scrutinise every decision the AI makes? When time is short and information incomplete, what does “human oversight” actually mean?
Dr Jones says that humans may technically be in the loop.
“In my opinion, that doesn’t mean they are empowered to make effective decisions and have enough information to actually monitor what happened. AI… is a very persuasive tool for people making decisions.”
Or, as Professor Leslie puts it, “We’re really facing a potentially huge danger of rubber stamping, where, because of the speed, you don’t have active, significant human participation to assess the recommendations that are being given by these systems.”
And then there’s the question of AI’s own mistakes.
Testing by Sky News found that neither Claude nor ChatGPT could tell how many legs a chicken had if the chicken did not look as expected.
Worse, the AI insisted that it was right, even when it was clearly wrong.
The example comes from a paper documenting dozens of similar failures. “It’s not only about animal legs,” said lead author Anh Vo. “The problem is common across all types of data and tasks.”
That is because AI does not actually see the world as humans do; it predicts what is most likely based on patterns in past data.
Most of the time, that kind of statistical reasoning is surprisingly effective. The world is so predictable that probabilities work out.
But some environments are, by their very nature, unpredictable and high-risk.
We are now testing the limits of this technology in the most adverse conditions imaginable: war.



