
Christoph Bartneck's interview with Dutch national television: "Japanese people are not as favourable toward robots as often assumed"

Our Associate Professor Christoph Bartneck was recently interviewed by VPRO, a national television station in the Netherlands, as part of the station's documentary series on robots. Dr Bartneck shared his views on humanoid robots and on people's reactions to and expectations of them, in particular how Japanese people perceive them and why.

The full version of the interview is available here: http://www.vpro.nl/programmas/tegenlicht/japanners-zijn-negatiever-dan-gedacht.html

(Please note that the interview is in Dutch. A Google-translated version is available here: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fwww.vpro.nl%2Fprogrammas%2Ftegenlicht%2Fjapanners-zijn-negatiever-dan-gedacht.html&edit-text=&act=url)

MHIT July intake applications and scholarships are now open

We are delighted to announce that applications for the July 2015 intake of the Master of Human Interface Technology (MHIT), along with the related scholarships, are now open.

Please see further details and apply at: http://hitlabnz.org/index.php/jobs

CNN reports an interview with our PhD candidate, Sean Welsh: "Killer robots: The future of war?"

Sean Welsh, our PhD candidate, works on robot ethics, and his research has recently been attracting media attention.

He gave an interview to The Conversation UK, and the article was picked up by CNN.

The full article can be found here.

Machines with guns: debating the future of autonomous weapons systems

The future of warfare might involve autonomous weapons systems, such as the BAE Taranis, although some are unsettled by the idea of giving machines lethal capabilities. (Image: Mike Young)

AUTHOR: Sean Welsh, Doctoral Candidate in Robot Ethics at University of Canterbury

The roles played by autonomous weapons will be discussed at a meeting in Geneva, Switzerland, this week, which could have far-reaching ramifications for the future of war.

The second Expert Meeting on Lethal Autonomous Weapons Systems (LAWS) will discuss issues surrounding what have been dubbed by some as “killer robots”, and whether they ought to be permitted in some capacity or perhaps banned altogether.

The discussion falls under the purview of the Convention on Certain Conventional Weapons (CCW), which has five protocols already covering non-detectable fragments, mines and booby traps, incendiary weapons, blinding lasers and the explosive remnants of war.

Australia and other parties to the CCW will consider policy questions about LAWS and whether there should be a sixth protocol added to the CCW that would regulate or ban LAWS.

There are generally two broad views on the matter:

  1. LAWS should be put in the same category as biological and chemical weapons and comprehensively and pre-emptively banned.

  2. LAWS should be put in the same category as precision-guided weapons and regulated.

The Campaign to Stop Killer Robots (CSKR) argues for a ban on LAWS similar to the ban on blinding lasers in Protocol IV of the CCW and the ban on anti-personnel landmines in the Ottawa Treaty. They argue that killer robots must be stopped before they proliferate and that tasking robots with human destruction is fundamentally immoral.

Others disagree, such as Professor Ron Arkin of Georgia Tech in the US, who argues that robots should be regarded more as the next generation of “smart” bombs.

They are potentially more accurate, more precise, completely focused on the strictures of International Humanitarian Law (IHL) and thus, in theory, preferable even to human war fighters who may panic, seek revenge or just plain stuff up. Malaysia Airlines Flight MH17, after all, appears to have been shot down by “meaningful human control”.

Only five nations currently support a ban on LAWS: Cuba, Ecuador, Egypt, Pakistan and the Holy See. None are known for their cutting-edge robotics. Japan and South Korea, by contrast, have big robotics industries. South Korea has already fielded Samsung SGR-A1 “sentry robots” on its border with North Korea.

Not everyone is thrilled about the idea of allowing autonomous weapons systems loose on, or off, the battlefield. (Image: Global Panorama/Flickr, CC BY-SA)

Definitions

At the end of last year’s meeting, most nations were non-committal. There were repeated calls, from Sweden, Germany, Russia and China among others, for better definitions and further discussion.

Few nations have signed up to the CSKR’s view that “the problem” has to be solved quickly before it is too late. Most diplomats are asking what, exactly, they are being asked to ban, and why.

The UK government has suggested that existing international humanitarian law provides sufficient regulation. The British interest is that BAE Systems is working on a combat drone called Taranis, which might be equipped with lethal autonomy and replace the Tornado.

LAWS are already regulated by existing International Humanitarian Law. According to the Red Cross, no expert disputes this. LAWS that cannot comply with IHL principles, such as distinction and proportionality, are already illegal. LAWS are already required to go through an Article 36 review before being fielded, just like any other new weapon.

As a result, the suggestion by the CSKR that swift action is required is not, as yet, gaining diplomatic traction. As their own compilation report shows, most nations have yet to grasp the issue, let alone commit to policy.

The real problem for the CSKR is that a LAWS is a combination of three hard-to-ban components:

  1. Sensors (such as radars), which have legitimate civilian uses

  2. “Lethal” cognition (i.e. computer software that targets humans), which is not much different from “non-lethal” cognition (i.e. computer software that targets “virtual” humans in a video game)

  3. “Lethal” actuators (i.e. weapons such as Hellfire missiles), which can also be directly controlled by a human “finger on the button” and are not banned per se.

Japan has already indicated it will oppose any ban on “dual-use” components of a LAWS. The problem is that everything in a LAWS is dual-use: the “autonomy” can be civilian and the lethal weapons can be human-operated, for example. What has to be regulated or banned is a combination of components, not any one core component.
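To make that point concrete, here is a minimal sketch in Python (the names and the civilian uses attached to them are hypothetical, invented purely for illustration): each component passes a legitimate-use test on its own, and only the assembled combination amounts to a LAWS.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    kind: str           # "sensor", "cognition" or "actuator"
    civilian_use: str   # every component has a legitimate non-LAWS use

# Each part, taken alone, is unobjectionable.
radar = Component("sensor", "air traffic control")
targeting_ai = Component("cognition", "targets virtual enemies in a video game")
missile = Component("actuator", "fired by a human finger on the button")

def is_laws(parts: list[Component]) -> bool:
    """Only the combination of sensing, lethal cognition and a lethal
    actuator in a single autonomous system constitutes a LAWS."""
    kinds = {p.kind for p in parts}
    return {"sensor", "cognition", "actuator"} <= kinds

print(is_laws([radar]))                         # False: just a radar
print(is_laws([radar, targeting_ai, missile]))  # True: the combination
```

The regulatory difficulty is visible even in this toy: there is nothing to ban in any single component, so a protocol would have to target the combination itself.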

Close-in weapon systems already autonomously react to and shoot down incoming missiles without requiring a human to pull the trigger. (Image: Stephanie Smith/U.S. Navy)

Out of the loop?

The phrase “meaningful human control” has been articulated by numerous diplomats as a desired goal of regulation. There is much talk of humans and “loops” in the LAWS debate:

  • Human “in the loop”: the robot makes decisions according to human-programmed rules, a human hits a confirm button and the robot strikes. Examples are the Patriot missile system and Samsung’s SGR-A1 in “normal” mode.

  • Human “on the loop”: the robot decides according to human-programmed rules, a human has time to hit an abort button, and if the abort button is not hit, then the robot strikes. Examples would be the Phalanx Close-In Weapon System or the Samsung SGR-A1 in “invasion” mode, where the sentry gun can operate autonomously.

  • Human “off the loop”: the robot makes decisions according to human-programmed rules, the robot strikes, and a human reads a report a few seconds or minutes later. An example would be any “on the loop” LAWS with a broken or damaged network connection.

It could be that a Protocol VI added to the CCW bans “off the loop” LAWS, for example, although the most widely fielded extant LAWS are “off the loop” weapons such as anti-tank and anti-ship mines, which have been legal for decades.

As such, diplomats might need a fourth category (all four modes are sketched in code below):

  • Robot “beyond the loop”: the robot decides according to rules it learns or creates itself, the robot strikes, and the robot may or may not bother to let humans know.
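As a rough illustration only (the function and parameter names are hypothetical, and this is not modelled on any real system), the four categories differ solely in where, if anywhere, a human decision sits between target selection and weapon release:

```python
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # human must confirm before the strike
    ON_THE_LOOP = auto()      # human can abort before the strike
    OFF_THE_LOOP = auto()     # human learns of the strike afterwards
    BEYOND_THE_LOOP = auto()  # robot follows rules it learned itself

def strike_proceeds(mode: ControlMode, confirmed: bool, aborted: bool) -> bool:
    """Does the engagement go ahead, given the human input received in time?"""
    if mode is ControlMode.IN_THE_LOOP:
        return confirmed       # silence means no strike
    if mode is ControlMode.ON_THE_LOOP:
        return not aborted     # silence means the strike proceeds
    return True                # off/beyond the loop: no human gate at all

# An "on the loop" system whose network link is down never receives an
# abort, which is why it degrades, in effect, to "off the loop".
print(strike_proceeds(ControlMode.ON_THE_LOOP, confirmed=False, aborted=False))  # True
```

The sketch makes the “meaningful human control” question precise: only in the first two modes does a human decision, or its absence, change the outcome.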

The meeting taking place this week will likely wrestle with these definitions, and it will be interesting to see whether any resolution or consensus emerges, and what implications that might have for the future of war.

IEEE SSIT Talk: Lethal Autonomous Robots and the Plight of the Noncombatant, by Dr Ronald C. Arkin

Tuesday 31 March, 2–4 pm, James Hight Undercroft. Presented by Dr Ronald C. Arkin, an IEEE SSIT Distinguished Lecturer.

ABSTRACT: A recent meeting (May 2014) of the United Nations in Geneva regarding the Convention on Certain Conventional Weapons considered the many issues surrounding the use of lethal autonomous weapons systems from a variety of legal, ethical, operational, and technical perspectives. Over 80 nations were represented and engaged in the discussion. This talk reprises the issues the author broached regarding the role of lethal autonomous robotic systems and warfare, and how if they are developed appropriately they may have the ability to significantly reduce civilian casualties in the battlespace. This can lead to a moral imperative for their use due to the enhanced likelihood of reduced noncombatant deaths. Nonetheless, if the usage of this technology is not properly addressed or is hastily deployed, it can lead to possible dystopian futures. This talk will encourage others to think of ways to approach the issues of restraining lethal autonomous systems from illegal or immoral actions in the context of both International Humanitarian and Human Rights Law, whether through technology or legislation.


BIOGRAPHY: Ronald C. Arkin is Regents' Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He served as STINT Visiting Professor at KTH in Stockholm, Sabbatical Chair at the Sony IDL in Tokyo, and in the Robotics and AI Group at LAAS/CNRS in Toulouse. Dr. Arkin's research interests include behavior-based control and action-oriented perception for mobile robots and UAVs, deliberative/reactive architectures, robot survivability, multiagent robotics, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Prof. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology and the IEEE Robotics and Automation Society (RAS) AdCom, and is a founding co-chair of the IEEE RAS Technical Committee on Robot Ethics. He is a Distinguished Lecturer for the IEEE Society on Social Implications of Technology and a Fellow of the IEEE.
