Robots on the battlefield: Georgia Tech professor thinks AI can play a vital role


In July, more than 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries, signed a pledge against the use of autonomous weapons.

The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, called on governments, academia, and industry to “create a future with strong international norms, regulations, and laws against lethal autonomous weapons”.

The institute defines lethal autonomous weapons systems — also known as “killer robots” — as weapons that can identify, target, and kill a person, without a human “in-the-loop”.

But according to Ronald C Arkin, Regents' Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology, outright banning robots and AI isn't the best way forward.

Arkin told D61+ Live on Wednesday that rather than being banned, autonomous systems in war zones should be guided by strong legal and legislative directives.

He isn't alone. Citing a recent European Commission survey of 27,000 people, Arkin said 60 percent of respondents felt that robots should not be used to care for children, the elderly, or the disabled, even though this is the space where most roboticists are working.

Despite the killer robot rhetoric, only 7 percent of respondents thought that robots should be banned for military purposes.

The United Nations, however, after six years of official work on the issue, has yet to define what a lethal autonomous weapon is or what meaningful human control means, let alone develop an ethical framework for nations to follow in their military robotics push.

“How are we going to ensure that they behave themselves, that they follow our moral and ethical norms? Which in this case are encoded not just as norms, but as laws, in international humanitarian law, which is the law that determines the legal and ethical way to kill each other on the battlefield,” Arkin explained. “It’s kind of strange that we have spent thousands of years coming up with these codes to find acceptable methods for killing each other.”

Around 60 nations are currently working on the application of robotics to warfare, including the United States, China, South Korea, and Israel, and according to Arkin, many of the fielded platforms are already becoming lethal.