Thousands of scientists specialising in artificial intelligence (AI) have declared their intention not to participate in the development or manufacture of autonomous robots capable of attacking people.
Demis Hassabis of Google DeepMind and Elon Musk, the head of SpaceX, are among the 2,400 signatories of the pledge, which aims to dissuade military organisations from building lethal autonomous weapons, systems they believe could pave the way for a new generation of weapons of mass destruction.
Orchestrated by the Boston-based Future of Life Institute, the pledge calls on governments to agree on standards, laws and regulations that effectively ban the development of killer robots. In the absence of such measures, the signatories pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”
The campaigners warn that rapid progress in AI means it is now possible to build sophisticated weapons capable of identifying, tracking and firing on human targets without the sign-off of a human supervisor. For many researchers, giving machines the power to decide who lives and who dies is a moral line that militaries should not cross, or even approach. British ministers have already declared that Great Britain will not develop lethal autonomous weapons, and that its armed forces will always retain control of their weapons.
“We cannot stop a determined person from building autonomous weapons, just as we cannot stop a determined person from building a chemical weapon,” explained one of the signatories, Toby Walsh, a professor of AI at the University of New South Wales in Sydney, Australia. “But if we don’t want rogue states or terrorists to have easy access to autonomous weapons, we must ensure they are not sold openly by arms companies.”