
The rise of the drone: The campaign to stop killer robots

The development of drone swarms is changing the face of warfare. Photo: Getty Images

Cheap, easy to use and deadly, autonomous weapons have been called the ‘Kalashnikovs of tomorrow’. FRED HARTER and GILES WHITTELL chart the drones’ terrifying rise.

On an overcast day last June, a group of drones took to the sky above Fort Benning, a sprawling US army base in southern Georgia. Each machine was identical to the next, and measured about a foot across. They hovered for a while above the tarmac before buzzing through the streets of a mock-up town towards their objective: City Hall. When they arrived, they surrounded the building, waiting until they were all in position. Then they flew inside and hunted down their target. The exercise lasted around 30 minutes, using off-the-shelf drones worth just a few hundred dollars each.

A few months later, a different sort of drone – an MQ-9 Reaper; $16 million, wingspan 20 metres – tracked and killed General Qassem Soleimani of Iran’s Revolutionary Guards Quds Force, thought to be the second-most powerful man in Iran, as he drove into Baghdad from the airport in an armoured motorcade. His SUV was incinerated almost instantly, his identity confirmed by his disembodied hand.

The Reaper attack was, in a sense, after its time. It destroyed a target followed by the CIA for years, using technology perfected by the Pentagon in 2007. And behind the strike was a pilot of sorts – a uniformed US air force specialist with a joystick and a screen in a softly lit operations room north of Las Vegas.

The Fort Benning exercise was directed by a lighter human touch. At its helm was a person with a laptop, but his role was to assign the drones objectives and leave the machines to sort out the rest. The swarm took off on human orders but manoeuvred and attacked without direction, hesitation or remorse.

If the Reaper was contemporary warfare, the swarm was the near future.

This is a future for which three American presidents – and one in particular – laid the groundwork, with a campaign of drone strikes from Pakistan to Africa that has killed perhaps 10,000 avowed enemies of the US but created many more. It’s a future in which, for the first time in the history of conflict, soldiers will be able to outsource to algorithms and artificial intelligence split-second decisions on when, whom and whether to kill.

“This is not science fiction,” Stuart Russell, a computer scientist at the University of California at Berkeley, told us. “No significant advances in AI are needed.” Swarms, like the one at Fort Benning, will soon be able to hunt for and eliminate humans in towns and inside buildings, and they “would be cheap, effective, unattributable and easily proliferated once major powers initiate mass production”.

Autonomous weapons are the military equivalent of driverless cars dispatched to pick up pizza by whatever route they like. Their ability to improvise and communicate with each other gives them the appearance of agency. It makes them terrifying. They are the coming thing, but they are also an evolution of the arm’s-length violence that critics say has been lowering commanders’ psychological threshold for the use of force for 20 years.

Unlike chemical and biological weapons, autonomous armed drones have not been outlawed. UN working groups have tried to start regulating their development, and failed: few know much for certain about their capabilities, and those who know most are usually the least likely to share their expertise. These people are mostly software engineers. They work in government and the private sector in the US, the UK, Israel and, crucially, in China, whose AI labs are closely controlled by the state. On this new frontier of conflict, the first casualty has been transparency. Accountability looks certain to be the next.

How did we get here? To understand the near future it helps to return to the near past, and to the industrialisation of drone usage as the “war on terror” gathered pace. The Pentagon first looked seriously at arming drones after al-Qaeda’s bombing of the USS Cole in October 2000. Within a decade, drones were the speartip of the US military – not yet as bringers of AI-driven autonomy, but certainly as bringers of death.

Faheem Qureishi heard a sound like a plane taking off near his family’s home in North Waziristan on January 23, 2009. In fact it was a drone passing overhead, the first ever dispatched by president Barack Obama. It sent a missile through the roof of the room where Qureishi was standing. He staggered out, gravely injured; two of his uncles and a cousin were killed.

The Obama administration never admitted mistaking Qureishi’s family for militants, and nor did it apologise. It did admit killing 3,797 people in 542 drone strikes between January 2009 and January 2017, of whom it said 116 were civilians. (The Bureau of Investigative Journalism, which has been tracking drone warfare for years, says the civilian death toll was at least six times that.)

Obama ramped up drone strike numbers ten-fold compared with George W Bush’s second term. His rationale was to cut back troop deployments and limit civilian casualties. US drones, his staff said, were “exceptionally surgical and precise”.

Sometimes, this has been true. But overall they have proved a blunt instrument. For 2014 alone, excluding operations in major war zones, the human rights watchdog Reprieve counted 1,147 fatalities in American attempts to kill 41 alleged militants with drones.

“Turns out I’m really good at killing people,” Obama was quoted as saying in the book Double Down, a chronicle of the 2012 presidential race. “Didn’t know that was going to be a strong suit of mine.”

He was more confident as a lawyer, and offered carefully worded justifications for drone strikes. They rested on 2001 legislation that expanded US presidents’ authority to use force without consulting Congress, and on highly elastic interpretations of the phrase “imminent threat”. Troops were at last coming home from Afghanistan, and the American public went along with it. Obama seemed to have normalised drone strikes in moral terms, while his top brass normalised them as a technology and his lawyers provided 41 pages of precedent and legal argument. The broad freedom he asserted to use drones has been bequeathed to his successor, Donald Trump.

Trump has used drone strikes against Isis and in Somalia and Yemen with more gusto than Obama used them in Pakistan and Afghanistan – and with less transparency. In his first two years in office he authorised more than 240 strikes, compared with 186 launched by Obama in 2009 and 2010, and last year he rescinded an executive order signed by Obama that required the CIA to publish an annual total of civilian drone strike casualties in non-combat zones.

What might Trump do with a new generation of autonomous drones in a second term, with no fear of censure by voters or sanction by other arms of the US government? There is no way of knowing. Nor are there any internationally agreed rules to stay his hand, or anyone else’s. America is not the only contender in the AI-driven arms race – Israel, Pakistan, Turkey, Russia and the UK are in it too…

The Fort Benning swarm experiment was conducted by the Defense Advanced Research Projects Agency (DARPA), otherwise known as the Pentagon’s “Department for Mad Science”. Each drone had been bought online and modified so it could zip around and find things without direct human control. The purpose was to see whether autonomous robots, working as a team, were capable of taking out targets in a built-up area.

They were.

Timothy Chung, the scientist in charge, envisions swarms of more than 250 drones that can assault and clear whole city blocks on their own. In this case the drones used were unarmed but they could be fitted with a few grammes of explosive each, enough to punch a hole through metal or penetrate a human skull. Cheap, easy to use and readily available, autonomous weapons like these have been called “the Kalashnikovs of tomorrow”. Indeed, Russian military drones are already being built by the maker of the AK-47.

Researchers and activists are worried. “About five years ago, I began noticing that the types of AI technologies that I had spent my life developing were being turned into prototype weapons,” says Toby Walsh, a professor at the University of New South Wales and one of Australia’s foremost AI experts. “I felt I had an obligation to speak out.”

Proliferation – the scale and spread of the technology – is his main concern. “In the history of military affairs, no military has kept a weapon to itself for a significant amount of time,” he says, pointing to the example of the hydrogen bomb, which Russia got with the help of spies in the US. But Walsh believes that autonomous weapons are much more troubling than nuclear ones.

“To build nuclear weapons you need some pretty serious infrastructure, which makes them relatively easy to regulate,” he says. By contrast, it doesn’t take much to fit a recreational drone with a bit of plastic explosive and some facial recognition software. “Previously you needed lots of resources to build an army. With robots, you don’t.”

Any weapons improvised like this would be extraordinarily unreliable. For one thing, object identification software can be easily tricked. In November 2018, a team of researchers from the Massachusetts Institute of Technology showed that even the best algorithms can be fooled into thinking a 3D-printed model of a turtle is a rifle – or any other object.

But a high risk of error won’t bother some prospective users, least of all terrorists seeking to cause mass casualties. Islamic State and Yemen’s Houthi rebels are already experimenting with standard commercial drones armed with grenades and small warheads, and even comparatively crude weapons can cause huge disruption. A case in point was the attack last September on Saudi Arabia’s oil infrastructure, when a combination of missiles and kamikaze drones temporarily wiped out half the kingdom’s output.

Ulrike Franke, a military technology expert at the European Council on Foreign Relations, agrees that weaponising drones is easy. “The question is not whether to weaponise them,” she says – that’s inevitable. “The question is how much autonomy to give them.”

The idea of killer robots that think for themselves isn’t new – think Terminator 3: Rise of the Machines (2003) – and some countries have given themselves a head-start in turning them into reality:

• The Phalanx gun has been standard equipment on US navy ships since the 1980s. Once switched on, it can shoot down missiles heading towards it without any human intervention, determining whether an object constitutes a threat by assessing its speed and trajectory with radar.

• The robotic sentry guns introduced by South Korea from 2006 to guard the demilitarised zone with North Korea, and by Israel to police its border with Gaza, work in a similar way – they use heat and motion sensors to pick up targets by themselves. Like the Phalanx, both are usually linked to a human operator who decides whether to fire. But they are designed to function with complete independence. South Korea’s version even has a built-in speech system that can warn and interrogate targets on its own.

• For around 20 years Israel has also been making the “Harpy”, a suicide drone equipped with a 16-kilogram warhead that loiters above battlefields. It can destroy air-defence systems by locking onto their radar signatures and smashing itself into them. China has bought several, as have Azerbaijan and India. And even as far back as the Second World War, German submarines were armed with acoustic torpedoes that homed in on targets using sonar.

• Russia is leading efforts to develop robotic tanks, although exactly how autonomous they are is not clear. Its military claims that its Platform-M system – a miniature tank kitted out with a machine gun and rocket launchers – “can destroy targets in automatic or semi-automatic control”, suggesting it has at least some freedom over what it attacks. Similar capabilities are attributed to Russia’s Wolf-2 robot – the size of a small car – and the Vikhr, which is the size of an actual tank.

• China, which aims to be the world’s leading AI power by 2030, sees swarms of cheap, expendable drones as a cost-effective way of negating superior American firepower in the Pacific. One Chinese firm, Oceanalpha, has posted footage of a fleet of 56 surface drones performing an elaborate nautical parade. Thousands of them could harass a carrier group, grounding its aircraft, or overwhelm a multi-billion-dollar air defence system designed to counter smaller numbers of conventional aircraft. Betting on their military potential, both China and the US are rushing to produce bigger and better drone swarms, in what military technology expert Paul Scharre has described as “some sort of weird swarm race”; soon after the US released a swarm of 103 drones from the back of a fighter jet in October 2016 – at that point the largest ever – China let loose a swarm of 119.

“In a very basic form, autonomy is already here,” says Vincent Boulanin, a researcher at the Stockholm International Peace Research Institute (SIPRI). He has counted 49 weapons systems currently in use that can detect and attack targets without human control.

Alongside autonomous drone swarms and missiles that can switch targets mid-flight, DARPA’s current projects include the “Sea Hunter” – a crewless 132-foot ship designed to prowl oceans for weeks at a time, looking for enemy submarines and clearing mines. As with many autonomous systems, the main appeal is its cost: $21m each versus $1.6 billion for the latest destroyer, and only $20,000 a day to run against the daily $700,000 it costs to deploy a fully manned vessel doing the same – often tedious – job. The ship made its maiden voyage in February 2019, sailing itself 1,500 nautical miles from California to Hawaii.

DARPA has high hopes for robots like these. In an email Jared Adams, the agency’s communications chief, outlined the agency’s ambitions. “DARPA envisions a future in which machines are more than just tools that execute human-programmed rules,” he says. That is how existing autonomous systems such as the Phalanx gun or Israel’s Harpy drone work, by performing a single task according to a narrow set of instructions. Instead, says Adams, “the machines DARPA envisions will function more as colleagues than as tools”.

This scenario may not be far off. Imagine a squadron of Sea Hunters guarding a fleet of manned ships in a stand-off between the US and China in the western Pacific. Or a dozen pilotless planes flying alongside a normal fighter jet, giving a single pilot the firepower of several aircraft. Autonomous drone swarms, like the one tested at Fort Benning, could let troops battling their way through dusty streets know what’s around the next corner or even spirit the wounded off the battlefield. In the words of one DARPA manager, they could also take out targets, “just as wolves hunt in coordinated packs”.

The magic ingredient that has enabled DARPA to make the leap from the Phalanx gun to autonomous robotic wolf packs is machine learning – a computer’s ability, thanks to AI, to spot patterns in data and teach itself to recognise and act on them. In principle this can help humans make better decisions too. DARPA has software that tries to predict the movements of hostile forces up to five hours in advance. The US National Security Agency has a programme called Skynet that flags potential couriers for terror groups by trawling through Pakistani mobile phone data and picking out suspicious patterns, such as individuals who travel frequently to the Afghan frontier or who regularly take the SIM cards out of their phones. Places they visit might be solid targets for drone strikes. Owing to employee protests, Google has withdrawn from a similar programme – Project Maven – that uses machine-learning to help analysts spot targets in thousands of hours of footage shot by drones.

But picking the wrong target can be catastrophic, and it happens a lot. Some would say it’s been the leitmotif of drone warfare from the start.

Late on October 7, 2001, 26 days after the attack on the Twin Towers on 9/11, a vengeful US military was poised to strike al-Qaeda in Afghanistan from every angle, starting from above. In a detailed reconstruction for The Atlantic years later, the journalist Chris Woods showed how confusion and error defined the outbreak of hostilities.

The Pentagon’s Combined Air Operations Center (CAOC) in Saudi Arabia had planned to deliver an opening salvo with F-16 fighter jets armed with 1,000lb bombs. The planes were in the air, close to their targets, which included a compound in the southern Afghan city of Kandahar to which the CIA had traced Mullah Omar, the Taliban’s supreme leader.

It was dark, but CAOC had the compound under surveillance with a Predator drone flying overhead and relaying live images to US commanders on an air base outside Riyadh. The ranking officers there were Air Force General Chuck Wald and his deputy, Dave Deptula. Minutes before clearing the F-16s to attack, they saw a puff of smoke on their screens as a Hellfire missile from the Predator destroyed a row of vehicles parked outside the compound.

“We both watched the weapon impact,” Deptula told Woods, “and both turned to each other and said, ‘who the f**k did that?'”

In that case the answer was a person – a CIA team, in fact, that had decided to pre-empt old-fashioned aircraft and try to take out the Taliban leader remotely. They failed; Omar was still inside the compound.

Others have not been so lucky. Makbhout Adhban, an investigator working for Reprieve who has analysed hundreds of drone strikes in Yemen, said no strike feels precise from the ground. “It was not the experience of people on the ground and certainly not my experience.” The effect on communities across the country has been stark – the shadowy threat of attack from the sky is generating “a constant sense of fear”, Adhban says, especially in children. “There is a fear of planes, in general, and any sound of anything in the sky. The communities are usually afraid of loud sounds and [if they hear them] they’ll run away and try to hide. I’ve also witnessed many children with sleep deprivation.”

Autonomy might sharpen target selection, but it will also inevitably shift the balance between human and synthetic judgment. People will die, and no one will be able to say with confidence who killed them.

Western governments and their militaries are cagey when it comes to how much autonomy they are willing to give weapons, especially since it is not clear whether such systems could be programmed to follow international laws requiring armed forces to distinguish between combatants and civilians, and between surrendered adversaries and active ones. So when we sent the UK’s Ministry of Defence a list of questions about Britain’s activities in this area, it wasn’t a surprise to receive a one-line response: “The United Kingdom does not possess fully autonomous weapon systems and has no intention of developing them.”

The key word here is “fully”. Strategists insist that humans will always remain in the frame in some way. After all, military doctrine has long emphasised the importance of command and control on the battlefield. The last thing generals want is to set loose a bunch of rampaging killer robots that are free to do their worst.

In the US, the working assumption is that drone autonomy in war zones will be in the service of soldiers and pilots, not the other way round. As an example, Dan Gettinger of the Center for the Study of the Drone speaks of “manned-unmanned” teamings of a human pilot with a squadron of fixed-wing drones acting as extra eyes, ears – and weapons platforms.

But the extent to which humans will remain in the frame is far from clear. Will an operator be required to authorise a strike every time a robot identifies a target? Or can that decision be left to the machine? Is it enough that there is an officer on a laptop, circling areas on a map and ordering his platoon of robots to attack everything that looks like a tank in “Area A” or everyone carrying guns in “Location B”? In neither scenario are the robots “fully” autonomous.

“I don’t know if there will always be people making that final decision,” Gettinger says. “It’s policy now, but of course policies can change.”

It’s not hard to argue for autonomous targeting on practical grounds. When a hypersonic missile is hurtling towards you at 1.5 miles per second – too fast for a human to realise that a threat even exists – you probably want an ultra-fast machine that responds automatically. But what happens when a drone patrolling a street malfunctions, or mistakes a civilian’s broom for a rifle? Is the officer who sent it on the mission to blame? Or is the coder back home?

Militaries prefer to leave the answers to these questions vague. “Humans will continue to have a role and exert some type of control over autonomous weapons,” predicts Boulanin at SIPRI. “The key problem is how they will continue to exert that control, and whether it will remain meaningful and appropriate.”

When we pointed out to the MoD’s press office that Britain’s armed forces held a well-publicised exercise on Salisbury Plain in December 2018 called “Autonomous Warrior”, they provided more details about Britain’s hopes for autonomy. These are mostly humdrum, focusing on the logistical and administrative aspects of armed conflict rather than questions of life or death. It’s true that one of autonomy’s main military appeals is the potential to automate tedious tasks, such as transport. It could also keep soldiers out of harm’s way. Many of the robots tested on Salisbury Plain are designed to resupply troops bogged down in combat zones, an extremely dangerous job that’s currently done by other humans.

But Britain is also building targeting and surveillance systems aided by algorithms. Perhaps the most expensive autonomous system developed by the UK so far is Taranis, a $185m unmanned stealth plane designed by BAE Systems, which can fly itself between continents, gather intelligence, pick out targets and potentially strike them – all without any human input. The MoD insists that the project is purely experimental and that the plane itself will never be deployed. But it also says that the technology developed under Taranis “will be at the core of any future combat air system”.

People should not automatically fear AI, says Walsh. But we do need clear limits on its use. “Everything we work on can be used for good or bad,” he says, pointing out that autonomous weapons use “pretty much the same algorithms that self-driving cars do for tracking pedestrians and avoiding them, but for tracking combatants and hitting them”.

The trouble is, efforts to agree an international treaty regulating autonomous weapons have stalled. Last August the latest round of talks at the United Nations lasted late into the night, finally breaking up at 3am with the parties still in disagreement over basic terms, such as the difference between “development” and “developing”. Even the meaning of ‘autonomy’ is hotly contested. “The discussions are slow,” says one UN official frustrated at the pace. But “the technological developments are not”.

The Pentagon uses dozens of acronyms to cover its autonomous weapons R&D work. One of the most striking is CARACaS, for Control Architecture for Robotic Agent Command and Sensing. This is software based on code used in Nasa’s Mars Rover, repurposed to turn standard speedboats into self-driving swarms that can “mob” unidentified ships, gather information on them and sink them if necessary. The system has been tested on a flotilla of pilotless inflatables in the James River in Virginia. You can see it on YouTube and for a drone swarm it looks docile, if a little eerie. But google “CARACaS drone” and by bizarre coincidence you’re taken to something very different: footage of a miniature airborne drone exploding above Venezuela’s President Nicolas Maduro during a military parade two years ago.

The regime denied it, but it was an assassination attempt. It was primitive and it failed. Next time, with some pre-loaded autonomous targeting and facial recognition software, who knows?

The world’s militaries, in uneasy combination with the software industry, are on a long sleepwalk towards a new age of killer drones: weapons that combine the defensive autonomy of sentry guns with the offensive autonomy of roaming pilotless aircraft – and the cheap availability of hobby drones. This mix is changing the face of war. It has raced ahead of formal military doctrine, and the law of war. The Horn of Africa and the entire Middle East live under the faint, fast-moving shadow of a (mainly) American-built drone force that can kill at any time without warning, accountability or any guarantee of hitting the right target.

What will it take to dial down the threat? Few major powers are keen to tie their hands when the full military potential of autonomy is not yet clear. Only 28 countries support a pre-emptive ban on autonomous weapons and almost all are minnows who stand to benefit little from them, such as Uganda, Costa Rica and the Holy See.

“We’ve had eight rounds of talks on killer robots now and every time they meet, they return to this notion of what constitutes human control,” says Mary Wareham of Human Rights Watch, who coordinates the “Campaign to Stop Killer Robots”. She is confident a deal will eventually be struck but worries that it might take the use of autonomous weapons in a real-life war to spur action, which is what happened before chemical weapons, nuclear weapons, landmines and cluster munitions were regulated. “The question is: will the treaty arrive in time or will it be too late?”

In this contest between hope and experience, experience wins – and wins ugly.

This story was first published by Tortoise, a different type of newsroom dedicated to slower, wiser news. Try Tortoise for a month for free at www.tortoisemedia.com/activate/tne-guest and use the code TNEGUEST.
