EU's rights agency warns on AI threat to rights | News | DW | 14.12.2020



EU's rights agency warns on AI threat to rights

Fundamental human rights could be at risk if AI technologies are used without due caution, the EU's rights agency says. It said AI can lead to discriminatory biases and perversions of justice if safeguards are lacking.


Artificial intelligence technologies are being employed more and more across the world

More attention should be paid to the possible negative effects of artificial intelligence technologies on people's fundamental rights, the EU's rights agency said in a report issued on Monday.

"AI is not infallible; it is made by people — and humans can make mistakes," said Michael O'Flaherty, director of the Fundamental Rights Agency (FRA), in comments cited on the agency's website.

"The EU needs to clarify how existing rules apply to AI. And organizations need to assess how their technologies can interfere with people's rights both in the development and use of AI," O'Flaherty said.

Neglected rights aspect

The FRA report, entitled "Getting the future right — Artificial intelligence and fundamental rights in the EU," identifies areas where it feels the bloc must create safeguards and mechanisms for holding businesses and public administrations accountable in their use of AI.

It points out the many sectors in which AI is already widely used, including deciding who will receive social benefits, predicting criminality and risk of illness, and creating targeted advertising.

The report says that much of the focus in developing AI has been on its "potential to support economic growth," while its impact on fundamental rights has been rather neglected.


Facial recognition technology is one use of AI that has aroused considerable controversy

Call for accountability

It is possible that "people are blindly adopting new technologies without assessing their impact before actually using them," David Reichel, one of the experts behind the report, told the AFP news agency.

Reichel told AFP that even when data sets did not include information linked to gender or ethnic origin, there was still "a lot of information that can be linked to protected attributes."

One example used in the report is employing facial recognition technology for law enforcement. It says even small error rates could lead to many innocent people being falsely picked out if the technology were used in places where large numbers are scanned, such as airports or train stations. "A potential bias in error rates could then lead to disproportionately targeting certain groups in society," the report says.

The report calls for more funding for research into the "potentially discriminatory effects of AI" and for any future legislation on AI to "create effective safeguards."

Above all, it says, the use of AI needs to be more transparent, more accountable and include the possibility of human review.
