Noel Sharkey, Professor of Artificial Intelligence and Robotics
As part of Sky News’ series Robot Revolution, Artificial Intelligence (AI) expert Professor Noel Sharkey looks at what impact machines could have on future battlefields. Do we want robots to have the final decision about who lives and who dies in military conflict zones? This is a question generating intense discussion amongst the world’s nations and militaries at the United Nations.
It is also the subject of an open letter from the Future of Life Institute, signed by thousands of artificial intelligence experts, calling for a ban on autonomous weapons systems, also known as "killer robots". No, this isn't the plot of a science fiction movie. Several nations, including the US, UK, China, Russia, Israel and the Republic of Korea, have been developing new weapons that can go out, find targets and kill them without human supervision.
They look like futuristic fighter jets, tanks, ships and submarines. Supporters of these weapons say the battlefield is becoming too fast and too complex for human control. When communications are jammed, robot weapons could still complete their missions. And sending robots to do our fighting will keep our own young people out of harm's way. Supporters also argue that militaries could multiply their forces by releasing swarms of these robots from land, sea and air. But this is short-sighted.
A major problem at present is that robot weapons cannot be guaranteed to comply with the laws governing conflict, such as the Geneva conventions. Proponents argue that they may eventually be more accurate than human soldiers. But the scientific challenges are enormous and there is no way to tell if and when that might happen. Once billions of dollars have been spent on their development, it may be too late to stop everyone from using them, ready or not.
An even bigger challenge for a machine is the decision about an attack being proportional to military advantage. In other words, is the target worth killing civilians and damaging civilian property? This is not a mathematical problem. You cannot say how many innocent women, children and old people the killing of Osama bin Laden was worth. It takes a very experienced commander in the field to weigh up the pros and cons of any use of violent force in particular circumstances. Would we really want to delegate such decisions to a machine?
The missing link in these arguments is consideration of the global consequences of using robot weapons, as pointed out by the International Committee for Robot Arms Control. Those who want to use the weapons and believe that it will give them a military edge need to realise that their enemy’s technology is not frozen in time. We can expect an arms race with robot weapons in many forms proliferating widely. The consequences are unimaginable. Forget about keeping our young men and women, or any of us, out of harm’s way.
One group that aims to stop these developments is the Campaign to Stop Killer Robots, an international coalition of 54 non-governmental organisations formed in 2012 to persuade the United Nations to reject the use of these weapons through an internationally legally binding treaty. This would make the position clear to everyone, including powerful dictators and rogue groups: if they used robot weapons, the world community of nations would unite against them.
Stigmatising their use, as we have done with chemical weapons, and stopping legal export is the best way to prevent an arms race and mass proliferation. And it is certainly the best way to halt the perilous drift towards the automation of warfare.