So far in the Ukrainian conflict the world has yet to see a fully autonomous weapon, according to Justin Bronk, a senior research fellow in air power and technology at the Royal United Services Institute (RUSI).
He says the Brimstone missile, made by the Franco-British company MBDA, moves closer to true independence from humans, paving the way for weapons of the future.
Describing it as a “fire and forget” missile, Bronk says the Brimstone One, the first version of the weapon, which experts believe has been supplied to Ukraine by the UK, can “go and find and kill tanks or other vehicles” within a “target grid” assigned by its operators. In other words, soldiers give the Brimstone an area in which they want enemy tanks found and destroyed, and the missile does the rest.
Professor Payne of King’s College explains that autonomy in weapon systems isn’t just about robots making decisions on the battlefield: “It’s using AI or machine learning to step into what’s gruesomely known as the kill chain.” That means using AI to detect a signature from a target, then using AI to autonomously analyze that signature to determine whether it is something to strike.
That kind of analytical capability is deployed in drones that loiter above the battlefield, waiting for a target of opportunity, such as a tank.
Technologies of this type deployed by Russia are not working as well as they should, according to RUSI’s Bronk: “They seem to be very unsuccessful. Many of them are falling from the sky without exploding.”
Dr. Christian Gustafsson of Brunel University’s Department of Intelligence and Security Studies says Russia has been working since 2017 on AI-guided missiles that can switch targets mid-flight, but notes that this only mirrors technology already present in Western weapons.
One area in which AI is making leaps and bounds, and where experts think the greatest risk exists, is in the decision-making tools built for military commanders.
These systems analyze battlefield intelligence, whether from pictures, videos, social media feeds or written reports filed by soldiers, and help commanders make decisions.
Primitive forms of autonomous military decision making have existed for decades.
For example, Bronk describes Russia’s Soviet-era Dead Hand nuclear command system, capable of launching nuclear missiles without human input if its sensors detect a hostile nuclear strike, the idea being to preserve a strike-back capability in case Russia’s leadership is wiped out. As Bronk puts it, Dead Hand is “a really cool way to unintentionally blow up the world.”
“People naturally trust machines,” he continues, highlighting one of the ethical questions confronting weapons designers and military artificial intelligence engineers.
“If the AI says ‘many of you will die if you do this, but the end result will be fewer casualties’, and you follow that advice, are you liable for the deaths that result directly from your actions?”
Experts say the current push for more autonomous battlefield technology is driven by the need for smart missiles, drones and bombs to do something predictable when they lose communication with their human controllers.
“War is, always has been, a great accelerator of technological development,” says Brunel’s Dr. Gustafsson. Without increasingly sophisticated targeting technology on board, he argues, AI-powered bombs and missiles could end up striking allied forces on the battlefield, something no Western military commander will tolerate.
“Not so much the Terminator T-1000 as a blue-on-blue incident,” he says, using the military term for friendly fire.
In the West, it doesn’t look like we’ll see truly autonomous weapons in the near future. Despite the increasing reliance on AI in weapon systems and the advances being made in computer science, there is a mix of practical and ethical reasons why killer robots won’t take over the world any time soon.