The question, at this point, is who will be first with drones on their own. And when. The time is not far off.
The Pentagon has already laid down the ground rules on how and when to use autonomous systems. Called “Directive 3000.09,” it established Defense Department policy and assigned responsibility for the use of autonomous and semi-autonomous functions in weapons systems, including manned and unmanned platforms. It also established “guidelines designed to minimize the probability and consequences of failures” in such weapons systems “that could lead to unintended engagements.”
The bottom line is that while the weapons systems might be autonomous, they must be controllable by humans. Since late 2002, Canning has been working on the autonomous use of weapons by robotic systems. “To the surprise of most, not only have I figured out how to have robots figure out for themselves when to pull the trigger, but how to do this while keeping the lawyers happy.”
Some non-governmental organizations say “allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.” The use of fully autonomous weapons “would create an accountability gap as there is no clarity on who would be legally responsible for a robot’s actions: the commander, programmer, manufacturer, or robot itself? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians and victims would be left unsatisfied that someone was punished for the harm they experienced.”
Canning disagrees. “My issue is that they assume from the beginning – even with their name – that we will be designing machines to autonomously target and kill people,” he said. “Nothing could be further from the truth! We need to have the other side of the issue told so that the rest of the world can see the fallacy of the ‘Stop Killer Robots’ campaign.” Canning said his approach is based on a statement from a now-retired lawyer who was with the Pentagon’s Office of General Counsel. “The ultimate goal in warfare is not to kill the enemy, but to bring hostilities to a complete and lasting close as quickly, and as humanely, as possible,” said Canning, quoting Department of Defense attorney Col. W. Hays Parks from a meeting at Dahlgren 12 years ago. Current technology — including existing drones and even manned aircraft (think the killing of 30 at the Doctors Without Borders hospital in Afghanistan by a U.S. AC-130 gunship) — is far from infallible. So there’s a lot to consider.
There are two main types of systems covered by the directive.
Autonomous weapons systems are those that, once activated, “can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”
Semi-autonomous weapons systems are those that, once activated, are intended to only “engage individual targets or specific target groups that have been selected by a human operator.”
The Pentagon directive is explicit on the role of these systems. “Human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks.” And then, only for static defense of manned installations, or onboard defense of manned platforms. Furthermore, the directive holds that “autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets.”
Semi-autonomous weapons systems, meanwhile, “may be used to apply lethal or non-lethal, kinetic or non-kinetic force,” according to the directive. Such systems onboard or integrated with unmanned platforms “must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.”
“In our approach we target either the bow, or the arrow, but not the human archer,” says Canning, adding that the U.S. military already has such capability, like the Aegis missile system aboard Navy vessels in auto-special engagement mode. That mode has never been used. “The scope of our approach is broader,” he says. “We go to pains to try to separate an enemy combatant from his weapon before going after his weapon. We have taken great care in addressing the Law of War’s Principles of Distinction, Proportionality, and Precautions.” For instance, by using “directed energy” weapons, U.S. troops can disarm an enemy without necessarily killing him, Canning says. “We think, however, that we have a better shot at reducing collateral damage because our machines will never autonomously target people, and can use non-traditional weapons,” Canning says.
“We will, however, retain the ability to roll a human operator into the control loop so that he may make a targeting decision on a person, if he needs to. Thus, a human will have control under those circumstances, and will also set the parameters that will be used by our autonomous machines when they target the other guy’s hardware.”
From: The Tampa Tribune.