UK developing ‘killer robot’ drones though it says it isn’t – study

The British government has been secretly funding research with a view to developing autonomous ‘killer robot’ drones, despite making public statements to the contrary, an anti-drone campaign group has claimed.

Drone Wars UK claims that the Defence and Security Accelerator (DASA) is funding research into weapons systems that could make life-or-death decisions without any human input.

In an unsettling new report, the group highlighted the Taranis drone, developed over a decade by BAE Systems and the UK Ministry of Defence. The Taranis can fly, plot its own routes and locate targets autonomously – and has cost a cool £200 million so far.

The year-long study by Drone Wars UK also uncovered multiple similar research and development programs that they say are being funded by the British MoD, despite public denials of any plans to develop the deadly machines.

Official MoD policy states that the UK is opposed to the development of autonomous weapons systems and that the government has “no intention of developing them.”


That claim doesn’t, however, appear to stand up to scrutiny.

Peter Burt, who authored the new report, said there is now “tangible evidence” that the MoD, along with military contractors and universities in the UK, is “actively engaged in research and development of the underpinning technology with the aim of using it in military applications.”

Burt cited the Taranis drone as an example of a drone with “advanced autonomous capabilities” and said that the development of a “truly autonomous” lethal drone in “the foreseeable future” was now a “real possibility.”

Instead of funding killer robots, the British government should be supporting international initiatives to prevent the development and use of autonomous weapons and should be “investigating the enormous potential of artificial intelligence to identify potential conflict areas and prevent wars before they start,” Burt argued.

An MoD spokesperson denied that there are plans to develop any weapons systems which would operate without input from humans, saying its weapons will “always be under human control as an absolute guarantee of oversight, authority and accountability.”

Actions speak louder than words, however, and the UK has declined to support proposals brought forward at the United Nations which would ban the use of fully autonomous drones.

The report also claims that a “predictive cognitive control system” is undergoing trials at the Joint Forces Intelligence Centre at RAF Wyton in Cambridgeshire. Using massive amounts of data, the system makes predictions that would be of “direct operational relevance” to the British armed forces.


What happens if such a machine is given incorrect data, either by mistake or design? Experts have repeatedly warned that putting life-or-death decisions in the hands of autonomous machines could have disastrous outcomes for humanity.

Speaking at the Web Summit in Lisbon last year, physicist Stephen Hawking warned that if it is not used properly, artificial intelligence “could be the worst event in the history of our civilization.” Hawking specifically cited the advent of powerful autonomous weapons in his speech.

“AI could develop a will of its own – a will that is in conflict with our own – and which could destroy us.”

In 2015, hundreds of academics in Canada and Australia signed open letters to their respective governments calling for a preemptive ban on the development and use of the deadly machines.


Via RT. This piece was reprinted by RINF Alternative News with permission or license.