Over 60 civil rights groups have argued that a proposed EU terrorist content law, to be voted on in a month, puts fundamental rights at risk.
Rights groups argue the proposal to swiftly remove terrorist content online will have unintended consequences
A proposed EU law that would force Google, Facebook and Twitter to remove terrorist content within an hour poses a risk to fundamental rights, according to 61 civil rights groups.
"We urge the European Parliament to reject this proposal, as it poses serious threats to freedom of expression and opinion," the groups stated in a letter sent to members of European Parliament.
The civil rights groups include Human Rights Watch, Amnesty International, Civil Liberties Union for Europe and the European Federation of Journalists.
After a series of attacks in 2018 by radicalized lone wolf attackers, with online terrorist content seen as a contributing factor, the European Commission drafted the legislation to be voted on next month.
"The spread of radical ideologies and of terrorist guidance material accelerates through the use of online propaganda, with the use of social media often becoming an integral part of the attack itself," the European Commission said in its Counter-Terrorism Agenda in December last year.
"Regulation on addressing the dissemination of terrorist content online would allow Member States to ensure the swift removal of such content," it added.
The Commission defines online terrorist content as material inciting terrorism or aimed at recruiting or training terrorists.
"One key aspect to developing trustworthy AI applications is ensuring that the data used to train algorithms is relevant, verifiable, of good quality and available in high variety to minimize bias," the agenda said.
The letter from the rights groups, however, claims that removing this content reliably is impossible using automated systems.
The pressure on online platforms to remove terrorist content within an hour would have unintended consequences wherever automated content moderation tools, such as upload filters, are put to use. The groups said it would not be possible for humans to filter the material that quickly, adding that content that is not terrorist in nature would get snagged as well.
"It is impossible for automated tools to consistently differentiate activism, counter-speech, and satire," the groups said. "Increased automation will ultimately result in the removal of legal content about discriminatory treatment of minorities."
Not only could the law lead to misunderstanding through the use of content moderation tools, but it could also be misused due to a lack of judicial oversight, the letter said.
Member states would have to designate, at their own discretion, competent authorities to implement the regulation's measures.
Though the proposal does state that authorities must be objective and non-discriminatory, the civil rights groups said they nevertheless believed that only courts, or independent administrative authorities subject to judicial review, should have a mandate to issue removal orders.
"This regulation could also empower authoritarians to stamp out criticism beyond their borders. It means leaders like [Hungary's] Viktor Orban could demand that an online platform remove content hosted in another country because he doesn't like it," said Eva Simon, senior advocacy officer at the Civil Liberties Union.
Because of the risks of censoring information that poses no threat, the lack of judicial review and the potential for abuse of power, the letter said the proposal has no place in EU law.
Three months after reaching a political agreement with EU countries, the European Parliament is expected to vote on the legislation next month.