Should 'Killer Robots' Be Banned?
Warning: Autonomous Weapons Systems Can Be Made Lethal, But Not Hack-Proof

How big is the step from humans using drones to kill other humans to building lethal autonomous weapons systems that can kill on their own?
Ethically and technologically, that's a huge leap. But military planners are now working to build autonomous weapons that some call "killer robots."
The UN and human rights organizations have been calling for the prohibition of such weapons.
"We must address the legal, moral and ethical implications posed by the development of lethal autonomous weapons systems," UN Secretary General António Guterres said last week. "It is my deep conviction that machines with the power and discretion to take lives without human involvement must be prohibited by international law."
Rise of the Drones
Already, almost 100 nations have their own military drones, the Wall Street Journal has reported, pegging the number of military unmanned aerial vehicles worldwide at 30,000 or more.
Numerous countries - including China, Israel, Russia, South Korea, the U.K. and U.S. - have also been experimenting with weapons systems that have a degree of autonomy in how they select and attack targets.
Given that society can't even seem to design a secure, internet-connected toaster, the odds don't look good that autonomous military systems would fare any better against potential hackers.
Pick your favorite sentient-robot-gone-awry Hollywood movie moment to provide a cautionary note. Take the Enforcement Droid (Series 209) from 1987's "RoboCop." In an unforgettable product demo, an executive holds up a gun to the walking and talking military droid, which warns that he has 20 seconds to put down his weapon. The executive quickly complies. But the robot still blows him away.
Prepare for Lawsuits
While it's fiction, that scenario might have the more legally minded already asking: Who's at fault?
"It's unclear who, if anyone, could be held responsible for unlawful acts caused by a fully autonomous weapon - the programmer, manufacturer, commander or machine itself. This accountability gap would make it is difficult to ensure justice, especially for victims," says the Campaign to Stop Killer Robots, an international group that has been calling on governments to pre-emptively ban lethal autonomous robotics.
"We could build robots that can kill today," according to campaign member Laura Nolan, who formerly worked at Google as a software engineer.
Nolan quit Google after working on Project Maven, a U.S. government contract focused on using artificial intelligence to more rapidly analyze drone footage and differentiate objects from people. While Google has dropped the contract, such work continues. (To be clear, Google does not appear to be helping to develop any autonomous weapons systems.)
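Project Maven's actual models and data are not public. But to make concrete what "differentiating objects from people" in drone footage involves in practice, here is a minimal sketch that runs a generic, pretrained object detector over a single video frame. The model choice, confidence threshold and file name are assumptions for illustration only, not anything tied to the program.

```python
# Illustrative only: a generic off-the-shelf detector applied to one video frame.
# Nothing here reflects Project Maven's actual models or data.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO detector; "person" is class 1 in the COCO label map.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def flag_people(frame_path, threshold=0.8):
    """Return bounding boxes for detections labeled 'person' above a confidence threshold."""
    frame = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        out = model([frame])[0]
    return [
        box.tolist()
        for box, label, score in zip(out["boxes"], out["labels"], out["scores"])
        if label.item() == 1 and score.item() >= threshold
    ]

# Example (hypothetical file name): boxes = flag_people("frame_0001.jpg")
```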
Autonomous Weapon Projects
The development of fully autonomous military systems is well underway in the U.S. and other nations around the world.
The U.S. Defense Advanced Research Projects Agency, or DARPA, for example, has been researching "swarm autonomy" for its "Offensive Swarm-Enabled Tactics," which "envisions future small-unit infantry forces using swarms comprising upwards of 250 small unmanned aircraft systems and/or small unmanned ground systems to accomplish diverse missions in complex urban environments."
Likewise, the U.S. Navy is researching how swarmboats can be used for mine-clearing and other dangerous tasks.
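Neither DARPA's swarm work nor the Navy's swarmboat research is open source, but the basic idea behind "swarm autonomy" - many simple agents coordinating through local rules rather than a central controller - can be sketched in a few lines. The update rule, parameters and shared waypoint below are invented for illustration and have no connection to the actual programs.

```python
# A toy "swarm" update: each agent steers toward a shared waypoint while
# repelling from neighbors that get too close. A textbook flocking sketch,
# not anything drawn from DARPA's Offensive Swarm-Enabled Tactics work.
import numpy as np

def step(positions, waypoint, min_sep=5.0, speed=1.0):
    """Advance every agent one tick; positions is an (N, 2) array of x/y coordinates."""
    to_goal = waypoint - positions
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9

    # Pairwise separation: push away from any neighbor closer than min_sep.
    deltas = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(deltas, axis=2) + np.eye(len(positions)) * 1e9
    repulse = (deltas * (dists < min_sep)[..., None]).sum(axis=1)

    heading = to_goal + 0.5 * repulse
    heading /= np.linalg.norm(heading, axis=1, keepdims=True) + 1e-9
    return positions + speed * heading

swarm = np.random.rand(250, 2) * 100        # 250 agents, the scale the OFFSET description mentions
for _ in range(200):
    swarm = step(swarm, waypoint=np.array([500.0, 500.0]))
```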
Other projects that seek autonomous capabilities include:
- Anaconda (AN-2) unmanned surveillance vessel: The U.S. Navy's plans for developing this "autonomous, AI watercraft" include giving it the ability "to follow terrain and way point markers, be capable of collision avoidance without direct input, and to perform tactical maneuvers and loiter in area for long periods of time, all without human intervention," says McLean, Virginia-based builder Swiftships. (A minimal sketch of this kind of waypoint following appears after this list.)
- Armata T-14 tank: Russia's next-generation tank, which may be the basis for a new, fully autonomous weapon system, appeared to grind to a halt when it first debuted in 2015, the Guardian reported. But the paper notes that development plans call for the tank to respond to enemy fire, regardless of whether it has a crew.
- Manta Ray: This U.S. DARPA project is focused on designing "future unmanned underwater vehicles that are capable of both long-duration missions and large payload capacity."
- MAST-13: The British Royal Navy's 42-foot, unmanned, autonomous vessel is meant to act as a water-borne drone to help identify mines and targets and to defend ships. Like the Anaconda, the current version of the project involves the vehicles being remotely controlled, rather than fully autonomous.
- Sea Hunter: Developed by DARPA, this "highly autonomous unmanned ship" is 132 feet long and designed to track quiet, diesel-electric submarines from the surface of the water for two or three months at a time. Its capabilities are meant to include "autonomous compliance with maritime laws and conventions for safe navigation, autonomous system management for operational reliability, and autonomous interactions with an intelligent adversary."
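The Anaconda entry above describes waypoint following and collision avoidance "without direct input." As a rough illustration of what such a control loop can look like, here is a minimal sketch; the keep-out radius, turn rate, helper names and obstacle model are assumptions, not anything from Swiftships' design.

```python
# A minimal waypoint-following loop with a crude keep-out check, purely to
# illustrate the capability described for the Anaconda. All parameters are invented.
import math

def steer(pos, heading, waypoints, obstacles, keep_out=20.0, turn_rate=0.1, speed=2.0):
    """Advance one tick: head for the next waypoint, but veer away from any obstacle inside keep_out."""
    if not waypoints:
        return pos, heading, waypoints          # mission complete; hold position

    tx, ty = waypoints[0]
    desired = math.atan2(ty - pos[1], tx - pos[0])

    # Collision avoidance: bias the heading away from the first obstacle that is too close.
    for ox, oy in obstacles:
        if math.hypot(ox - pos[0], oy - pos[1]) < keep_out:
            desired = math.atan2(pos[1] - oy, pos[0] - ox)   # crude: prioritize getting clear
            break

    # Turn toward the desired course at a limited rate, then move forward.
    error = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-turn_rate, min(turn_rate, error))
    pos = (pos[0] + speed * math.cos(heading), pos[1] + speed * math.sin(heading))

    if math.hypot(tx - pos[0], ty - pos[1]) < 5.0:   # waypoint reached, move to the next one
        waypoints = waypoints[1:]
    return pos, heading, waypoints

# Example: pos, heading, wps = steer((0, 0), 0.0, [(100, 0), (100, 100)], obstacles=[(50, 5)])
```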
'Ethical AI' for Combat?
The U.S. government has been increasingly talking about such weapon systems and its intention to use AI to guide them. In early 2019, the U.S. Defense Department released its AI strategy, promising to develop a "vision and guiding principles for AI ethics and safety" for warfare purposes that would be guided by "a wide range of experts and advisers from across academia, the private sector, and the international community."
The Pentagon's AI strategy also states: "We will also continue to undertake research and adopt policies as necessary to ensure that AI systems are used responsibly and ethically."
Last October, the Defense Innovation Board, an independent committee that advises the U.S. defense secretary, released its "Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense" report, with recommendations that touch on the ethical use of AI in both combat and non-combat environments.
Beyond demanding that any use of AI be both responsible and equitable - including avoiding bias in combat and non-combat AI systems alike - the Defense Innovation Board recommends that all systems be traceable, so their methodologies can be easily audited, as well as reliable, meaning they would be well-tested for all potential environments and built to function dependably for the entirety of their lifespan.
The Defense Innovation Board says such systems must also be governable: "DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior."
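The board does not prescribe an implementation, but one way to read the "governable" and "traceable" requirements together is as a runtime monitor that logs every decision and can disengage the controller when its behavior drifts outside preset bounds. The sketch below assumes a toy action vocabulary and threshold; it is an illustration of the pattern, not a DoD specification.

```python
# A generic safety-envelope monitor wrapped around an autonomous controller.
# Thresholds, action names and the interface are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class SafetyEnvelope:
    max_engagements_per_minute: int = 1
    log: list = field(default_factory=list)
    engaged: bool = True

    def review(self, proposed_action, recent_engagements):
        """Veto the action and deactivate the system if it exceeds the envelope."""
        self.log.append(proposed_action)             # traceability: every decision is auditable
        if not self.engaged:
            return "HOLD"
        if proposed_action == "ENGAGE" and recent_engagements >= self.max_engagements_per_minute:
            self.engaged = False                      # governability: disengage on escalatory behavior
            return "DISENGAGE"
        return proposed_action

monitor = SafetyEnvelope()
print(monitor.review("ENGAGE", recent_engagements=0))   # passes through
print(monitor.review("ENGAGE", recent_engagements=3))   # trips the envelope -> DISENGAGE
print(monitor.review("ENGAGE", recent_engagements=0))   # system stays held until reviewed
```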
But are these engineering mandates achievable?
Scope Creep
Some government officials acknowledge the potential risks posed by autonomous weapons.
"Autonomy makes a lot of people nervous," Steve Olsen, deputy branch head of the Navy's mine warfare office, told Defense News last year. "But the flip side of this is that there is one thing that we have to be very careful of, and that's that we don't over-trust. Everybody has seen on the news [when people] over-trusted their Tesla car. That is something that we can't do when we talk about weapons systems."
Olsen says that, as with current military drone doctrine, there always needs to be a human ready to take the controls - or, in the case of a drone, to both authorize and then actually attempt to kill someone.
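In software terms, that doctrine amounts to a human-in-the-loop authorization gate: the autonomy can nominate an engagement, but nothing is released without an explicit, per-engagement operator confirmation. Here is a bare-bones sketch of that pattern; the function names and token scheme are invented for illustration.

```python
# A minimal human-in-the-loop gate: the autonomy proposes, a human disposes.
# All names and the in-memory store are hypothetical.
import uuid

PENDING = {}

def nominate(target_id):
    """Autonomy proposes an engagement; returns a token the operator must approve."""
    token = str(uuid.uuid4())
    PENDING[token] = {"target": target_id, "approved": False}
    return token

def operator_approve(token):
    """Called only from the human operator's console."""
    PENDING[token]["approved"] = True

def release(token):
    """Refuse to act unless a human has approved this specific engagement."""
    request = PENDING.get(token)
    if not request or not request["approved"]:
        raise PermissionError("No human authorization on record; holding fire.")
    return f"engagement {token} authorized against {request['target']}"

t = nominate("track-042")
# release(t)            # would raise PermissionError: no human in the loop yet
operator_approve(t)
print(release(t))
```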
But don't humans have a propensity to over-trust systems that promise automation, as multiple Tesla driver deaths have demonstrated? Also, scope creep - conceptual or otherwise - remains a significant risk.
In an era when military hardware gets repurposed and sold for law enforcement purposes, could autonomous robots get used for border control, or to face down bank robbers or deter protestors, whatever the end user license agreement might state?
Even without weighing such risks, Nolan of the Campaign to Stop Killer Robots says there's a more intractable problem: The limits of software and hardware engineering.
"We cannot build a safe robot that can't be hacked, that works predictably in most or all situations, that is free of errors, and that can reliably manage the complexities involved in international law and the laws of war," she says. "That is why a treaty banning their development and use is urgently needed."