Artificial Intelligence And Weapons – Intense Ethical Questions

by | Aug 2, 2018 | News, Vincentian Family at the U.N.

Video by the International Committee of the Red Cross

I’m not a real fan of Sci-Fi. Yes, I loved all the convenient contraptions in the Jetsons’ home and marveled at the idea of Captain Kirk getting beamed up. But thinking of “unfriendlies” in movies such as Terminator, Alien, and Star Trek (V’Ger) gives me a chill.

We should all develop a healthy sense of concern as science moves into the arena of artificial intelligence (AI) and warfare.  We could be on the verge of independent-thinking weapons.

We are not quite there yet. But we could be getting close.

“Could we delegate life and death to a machine? Who will be responsible, then, for violations of human rights? The company who developed the robot? The military? The engineers? The robots?” – Veronique Christory, weapons adviser for the International Committee of the Red Cross.

As some countries continue to develop artificial intelligence, robotics, and prototypes of autonomous weapons, it’s time for the world to reflect seriously on the potential impact of lethal autonomous weapons systems (LAWS).

Religious at the UN (RUN) recently heard about concerns related to LAWS from Veronique Christory, a weapons adviser for the International Committee of the Red Cross (ICRC).

We often think of the ICRC as a responder with humanitarian aid in emergencies. But because the conduct of armed conflict causes much of the suffering it attempts to relieve, the ICRC also plays a leading role in the promotion and development of law regulating the use of weapons.

International humanitarian law governs the choice of weapons and prohibits or restricts the use of certain weapons.  The ICRC attempts to ensure that the use of new weapons, means or methods of warfare comply with the rules of international humanitarian law.  (This can include placing parameters or containment on certain weapons’ use).

Christory mentioned the development of artificial intelligence and its application to weapons as the third significant revolution in the history of weapons.  The first two were the development of gunpowder and of nuclear power.

The 21st Century poses some crucial questions about weapons, she noted. According to Christory, fully autonomous, lethal weapons would have the capacity to identify, select, and attack targets on their own.

I would ask, “What if they evolve to that level in the future and go rogue? Or what if such weapons fell into the hands of unsavory characters? Or hackers?”

Some argue that weapons of artificial intelligence could protect people by targeting precise areas. But imagine a high-tech, efficient, fully autonomous weapon independently choosing specifically where and upon whom it is aimed.  The Red Cross has called upon nations to ban them.

“Fully autonomous weapons are illegal and unethical,” said Christory.

The development of such autonomous artificial intelligence challenges the whole international system, in which humans have been the decision makers, Christory pointed out. “Could we delegate life and death to a machine?” she asked. “Who will be responsible, then, for violations of human rights? The company who developed the robot? The military? The engineers? The robots?”

“What would succeed humanitarian law?” she asked. One might also question whether fully autonomous weapons could process the importance of human dignity. The role of faith leaders is crucial in addressing such questions, she noted.

Questions also arise as to whether fully autonomous weapons can discern the principles of a just war, or some of the laws of armed conflict (LOAC). For example, could such weapons distinguish between civilian populations and combatants and between civilian objects and military objectives, thus directing their operations only against military objectives? Could they consider that damage to civilians and their property cannot be excessive in relation to the military advantage gained?

In addition to the International Red Cross, there are many other opponents to fully autonomous weapons. The Campaign to Stop Killer Robots formed in 2013 and calls for a ban on the development, production, and use of fully autonomous weapons. The Campaign indicates this should be achieved through a new international treaty as well as through national laws and other measures.

“Once this Pandora’s box is opened, it will be hard to close.”

Researchers, scientists, and some companies who work with robotics and artificial intelligence also oppose fully autonomous weapons.  In August, 2017, 116 specialists (including founders of robotics and artificial intelligence companies) from 26 countries signed a letter calling for the UN to ban lethal autonomous weapons.

“We do not have long to act,” the letter said.  “Once this Pandora’s box is opened, it will be hard to close.”

In early June, Google said it would not seek another contract for work providing artificial intelligence related to a US Department of Defense project. It will continue to assist the military, but not in the area of artificial intelligence and weaponry. On June 7, Google released new artificial intelligence ethics principles.

In April, 82 countries party to the UN Convention on Certain Conventional Weapons (CCW) discussed a potential ban on lethal autonomous weapons (LAWS). It was the fifth such gathering. They did not reach an agreement, but the meeting stimulated interest in continuing dialogue and working toward one. Almost every nation agreed that maintaining human control over autonomous weapons is important, but defining “meaningful human control” over such weapons proved thorny.

A number of states called for a total ban on lethal autonomous weapons. As of April, five nations (France, the UK, the US, Israel, and Russia) rejected moving to negotiate new international law on fully autonomous weapons. (The US does have a policy, expiring in November 2022, that allows the US Department of Defense to develop or use only fully autonomous systems that deliver non-lethal force, unless department officials waive the policy at a high level.) We shall see what happens after the policy expires in 2022.
