US military adopts 'ethical' AI guidelines | News | DW | 25.02.2020

US military adopts 'ethical' AI guidelines

The guidelines set out ways to ensure soldiers conducting AI-enabled warfare "exercise appropriate levels of judgment and care." But some observers believe the principles may just be "a bit of an ethics-washing project."

The Pentagon announced Monday it has adopted "ethical principles" for the use of artificial intelligence (AI) in warfare.

The move comes as the US military seeks to accelerate the development of AI for use in conflict while winning public trust amid a series of controversies surrounding the technology.

AI is an enabling technology: robots, drones and unmanned vehicles are among the battlefield systems that at times draw on it.


What are the ethical principles?

The new principles call for people to "exercise appropriate levels of judgment and care" when deploying and using AI systems. There also needs to be "explicit, well-defined uses" for AI technology, the guidelines stated.

Decisions made by automated systems should also be "traceable" and "governable." This means the military will have "the ability to disengage or deactivate deployed systems that demonstrate unintended behavior."

Prior to these principles, US military guidelines stipulated only that a human had to be involved in all AI-assisted military decision-making, a safeguard known as "human in the loop."


Video: Facial recognition reveals AI's darker sides (02:41)

'Ethics-washing project'

However, not everyone believes the principles will do what they suggest.

"I worry that the principles are a bit of an ethics-washing project," Lucy Suchman, professor of anthropology of science and technology at Lancaster University, told the Associated Press. "The word 'appropriate' is open to a lot of interpretations."

Other observers have suggested that the principles are aimed at winning back the confidence of the US tech industry.

In 2018, Google chose not to renew a Pentagon contract known as "Project Maven" under pressure from its employees. The project used machine learning to distinguish people and objects in drone footage.

The principles are the outcome of 15 months of consultations with technology companies and universities, led by Eric Schmidt, the former executive chairman of Google.

kmm/ls (AP, AFP)

