The future of warfare might involve autonomous weapon systems, such as the BAE Taranis, although some are unsettled by the idea of giving machines lethal capabilities.

Mike Young
Doctoral Candidate in Robot Ethics at University of Canterbury
The roles played by autonomous weapons will be discussed at a meeting in Geneva, Switzerland, this week, which could have far-reaching ramifications for the future of war.
The second Expert Meeting on Lethal Autonomous Weapons Systems (LAWS) will discuss issues surrounding what have been dubbed by some as “killer robots”, and whether they ought to be permitted in some capacity or perhaps banned altogether.
The discussion falls under the purview of the Convention on Certain Conventional Weapons (CCW), which has five protocols already covering non-detectable fragments, mines and booby traps, incendiary weapons, blinding lasers and the explosive remnants of war.
Australia and other parties to the CCW will consider policy questions about LAWS and whether there should be a sixth protocol added to the CCW that would regulate or ban LAWS.
There are generally two broad views on the matter:
LAWS should be put in the same category as biological and chemical weapons and comprehensively and pre-emptively banned.
LAWS should be put in the same category as precision-guided weapons and regulated.
The Campaign to Stop Killer Robots (CSKR) argues for a ban on LAWS similar to the ban on blinding lasers in Protocol IV of the CCW and the ban on anti-personnel landmines in the Ottawa Treaty. They argue that killer robots must be stopped before they proliferate and that tasking robots with human destruction is fundamentally immoral.
Others disagree, such as Professor Ron Arkin of Georgia Tech in the US, who argues that robots should be regarded more as the next generation of “smart” bombs.
They are potentially more accurate, more precise, completely focused on the strictures of International Humanitarian Law (IHL) and thus, in theory, preferable even to human war fighters who may panic, seek revenge or just plain stuff up. Malaysian Airlines flight MH17, after all, appears to have been shot down by “meaningful human control”.
Only five nations currently support a ban on LAWS: Cuba, Ecuador, Egypt, Pakistan and the Holy See. None is known for cutting-edge robotics. Japan and South Korea, by contrast, have big robotics industries. South Korea has already fielded the Samsung SGR-A1 “sentry robot” on its border with North Korea.
Not everyone is thrilled about the idea of allowing autonomous weapons systems loose on, or off, the battlefield. Global Panorama/Flickr, CC BY-SA
At the end of last year’s meeting, most nations were non-committal. Several, including Sweden, Germany, Russia and China, repeatedly called for better definitions and more discussion.
Few nations have signed up to the CSKR’s view that “the problem” has to be solved quickly before it is too late. Most diplomats are asking: what exactly would they like to ban, and why?
The UK government has suggested that existing international humanitarian law provides sufficient regulation. The British interest is that BAE Systems is working on a combat drone called Taranis, which might be equipped with lethal autonomy and replace the Tornado.
LAWS are already regulated by existing International Humanitarian Law. According to the Red Cross, no expert disputes this. LAWS that cannot comply with IHL principles, such as distinction and proportionality, are already illegal. LAWS are already required to go through Article 36 review before being fielded, just like any other new weapon.
As a result, the suggestion by the CSKR that swift action is required is not, as yet, gaining diplomatic traction. As their own compilation report shows, most nations have yet to grasp the issue, let alone commit to policy.
The real problem for the CSKR is that a LAWS is a combination of three hard-to-ban components:
Sensors (such as radars) which have legitimate civilian uses
“Lethal” cognition (i.e. computer software that targets humans), which is not much different from “non-lethal” cognition (i.e. computer software that targets “virtual” humans in a video game)
“Lethal” actuators (i.e. weapons such as Hellfire missiles), which can also be directly controlled by a human “finger on the button” and are not banned per se.
Japan has already indicated it will oppose any ban on “dual-use” components of a LAWS. The problem is that everything in a LAWS is dual-use – the “autonomy” can be civilian, the lethal weapons can be human operated, for example. What has to be regulated or banned is a combination of components, not any one core component.
Close In Weapon Systems already autonomously react to and shoot down incoming missiles without requiring a human to pull the trigger. Stephanie Smith/U.S. Navy
Out of the loop?
The phrase “meaningful human control” has been articulated by numerous diplomats as a desired goal of regulation. There is much talk of humans and “loops” in the LAWS debate:
Human “in the loop”: the robot makes decisions according to human-programmed rules, a human hits a confirm button and the robot strikes. Examples are the Patriot missile system and Samsung’s SGR-A1 in “normal” mode.
Human “on the loop”: the robot decides according to human-programmed rules, a human has time to hit an abort button, and if the abort button is not hit, then the robot strikes. Examples would be the Phalanx Close-In Weapon System or the Samsung SGR-A1 in “invasion” mode, where the sentry gun can operate autonomously.
Human “off the loop”: the robot makes decisions according to human-programmed rules, the robot strikes, and a human reads a report a few seconds or minutes later. An example would be any “on the loop” LAWS with a broken or damaged network connection.
A Protocol VI added to the CCW could, for example, ban “off the loop” LAWS. However, the most widely fielded extant LAWS are “off the loop” weapons, such as anti-tank and anti-ship mines, which have been legal for decades.
As such, diplomats might need a fourth category:
Robot “beyond the loop”: the robot decides according to rules it learns or creates itself, the robot strikes, and the robot may or may not bother to let humans know.
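The distinctions between these modes come down to how human input gates the machine's rule-based decision. A deliberately abstract sketch of the first three categories, purely illustrative (all names here are invented for this toy model, and nothing reflects any real system):

```python
from enum import Enum, auto

class Mode(Enum):
    IN_THE_LOOP = auto()   # human must confirm before any action
    ON_THE_LOOP = auto()   # human may abort within a time window
    OFF_THE_LOOP = auto()  # human is only informed afterwards

def may_act(mode, human_confirmed=False, human_aborted=False):
    """Toy model: may the system act, given the control mode
    and whatever human input was (or was not) received?"""
    if mode is Mode.IN_THE_LOOP:
        return human_confirmed      # nothing happens without a confirm
    if mode is Mode.ON_THE_LOOP:
        return not human_aborted    # acts unless aborted in time
    return True                     # off the loop: no human gate at all

# Same human-programmed rules in each case; only the human's role changes.
print(may_act(Mode.IN_THE_LOOP))                      # False: no confirm given
print(may_act(Mode.ON_THE_LOOP, human_aborted=True))  # False: abort was hit
print(may_act(Mode.OFF_THE_LOOP))                     # True: human reads a report later
```

The fourth, “beyond the loop” category does not fit this sketch at all, which is precisely the point: its rules are no longer human-programmed inputs to such a function.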
The meeting taking place this week will likely wrestle with these definitions, and it will be interesting to see if any resolution or consensus emerges, and what implications that might have on the future of war.
Tues 31 March, 2–4 pm, James Hight Undercroft: lecture by Dr Ronald C. Arkin, an IEEE SSIT Distinguished Lecturer
ABSTRACT: A recent meeting (May 2014) of the United Nations in Geneva regarding the Convention on Certain Conventional Weapons considered the many issues surrounding the use of lethal autonomous weapons systems from a variety of legal, ethical, operational, and technical perspectives. Over 80 nations were represented and engaged in the discussion. This talk reprises the issues the author broached regarding the role of lethal autonomous robotic systems and warfare, and how if they are developed appropriately they may have the ability to significantly reduce civilian casualties in the battlespace. This can lead to a moral imperative for their use due to the enhanced likelihood of reduced noncombatant deaths. Nonetheless, if the usage of this technology is not properly addressed or is hastily deployed, it can lead to possible dystopian futures. This talk will encourage others to think of ways to approach the issues of restraining lethal autonomous systems from illegal or immoral actions in the context of both International Humanitarian and Human Rights Law, whether through technology or legislation.
BIOGRAPHY: Ronald C. Arkin is Regents' Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He served as STINT visiting Professor at KTH in Stockholm, Sabbatical Chair at the Sony IDL in Tokyo, and in the Robotics and AI Group at LAAS/CNRS in Toulouse. Dr. Arkin's research interests include behavior-based control and action-oriented perception for mobile robots and UAVs, deliberative / reactive architectures, robot survivability, multiagent robotics, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Prof. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology, the IEEE Robotics and Automation Society (RAS) AdCom, and is a founding co-chair of IEEE RAS Technical Committee on Robot Ethics. He is a Distinguished Lecturer for the IEEE Society on Social Implications of Technology and a Fellow of the IEEE.
A University of Canterbury engineering PhD student is researching sports, such as table tennis, to ensure closer games for both better and less skilled players.
David Altimira has been researching in the university’s HIT Lab NZ to balance a game by giving the weaker player greater chances of success. He will present a paper to the 11th Advances in Computer Entertainment Technology Conference in Madeira, Portugal, in November.
As part of his project Altimira, who is collaborating with researchers from Melbourne’s RMIT University, changed the size of the table tennis bat and the table to make it more difficult for the better of the two players.
His supervisor, world croquet champion and University sports researcher Dr Jenny Clarke, says sensors mounted under the table detected where the ball bounced, projected that position onto the table, and measured factors such as the length of rallies and ball speed. Altimira also used a ceiling-mounted camera to monitor other dynamics.
Dr Clarke says the research was aimed at getting more young New Zealanders to exercise. Nearly 11 per cent of children aged 10 to 14 are obese, and in adulthood the proportion swells to 28 per cent.
Altimira also had the better player use a half-sized bat, or aim for a target area much smaller than the usual table tennis table, if their lead stretched out to six points.
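The handicap described above is score-triggered: once the leader's margin reaches a threshold, their task gets harder. A minimal sketch of that idea, assuming a simple two-state rule (the six-point trigger is from the article; the `target_scale` function name, the 0.5 scale factor and the all-or-nothing shape of the rule are illustrative assumptions, not the researchers' actual model):

```python
def target_scale(leader_score, trailer_score, trigger_lead=6, shrink=0.5):
    """Return the fraction of the table the leading player may aim at.

    If the leader's margin has reached trigger_lead points, their valid
    target area shrinks by the given factor; otherwise the full table counts.
    """
    lead = leader_score - trailer_score
    return shrink if lead >= trigger_lead else 1.0

print(target_scale(10, 3))  # 0.5: lead of 7, leader must hit a smaller area
print(target_scale(7, 5))   # 1.0: lead of 2, full table still valid
```

A smoother variant could shrink the area gradually with the margin, which would avoid an abrupt difficulty jump at exactly six points; the sketch keeps the step rule because that matches the description in the article.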
“A challenge can be more important than the competition itself, especially if it is for fun. People might not like to play competitively but enjoy being challenged.

“This research could benefit families where the younger children are generally less skilled. The same applies in a social setting where some friends are much more skilled than others. This new system makes it competitive and fun for everyone.”
Altimira, who studied computer science in Barcelona and completed an internship in Chicago, says he digitally reconfigured one side of the table tennis table to restrict the target area for the better player and so balance the game.
“This made it harder for the good player, so overall we helped encourage people to do more physical exercise, which has mental, health and social benefits. Making it harder does not necessarily mean people will exercise more, but making physical activity more engaging can increase people’s physical activity.”