
On 'Safer Internet Day,' the focus is on violence

Gabriel Borrud | February 10, 2015

Around the world on Tuesday, people are gathering to mark "Safer Internet Day," an initiative focused on making the web a safe place for kids. DW takes a look at one specific area of focus: the spread of violent content.

Masked man (Image: Reuters)

"Let's create a better internet together," is the motto of a worldwide initiative that has designated the second Tuesday in February "Safer Internet Day." It's part of global efforts to promote "safer and more responsible use of online technology and mobile phones, especially among children and young people across the world."

In essence, a "safer" Internet is a better Internet, and for the past 11 years the global initiative that now comprises 100 countries and a number of industry giants has dedicated itself to achieving that goal.

"One of the interesting things about 'Safer Internet Day' is the number of competing tech companies that are putting aside their differences for a day to focus on safety," said Larry Magid, co-director of non-profit site connectsafely.org, of a list of sponsors that includes Microsoft, Google, Facebook and Twitter , as well as a host of non-profits and advocacy groups.

The specific focus this year is the sharing of violent content online, in particular on social network platforms, and the main gathering in the United States will be held at Facebook headquarters in the heart of Silicon Valley. It comes amid widespread media attention to the online distribution of "Islamic State" (IS) beheading videos. In response to that attention - as well as to calls for Facebook to block such content - both Facebook and Twitter have changed their policies on the posting and sharing of explicitly violent content.

Two teenage girls talking. According to the CDC, the sight alone of violent content can have adverse effects on young people (Image: Sylvie Bouchard/Fotolia)

Facebook, Twitter tweak policy

In December, the social network behemoth with over 1.3 billion users announced it would blanket such violent videos with a warning screen - asking users whether they "are sure they want to see this," and reminding them that "graphic videos can shock, offend and upset." Users under the age of 18 cannot access videos behind the warning screen at all.
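Facebook has not published the logic behind these screens, but the behavior it describes - a click-through warning for adults and a hard block for minors - amounts to a simple decision rule. The following Python sketch is purely illustrative; the function and field names are hypothetical and do not reflect Facebook's actual code.

from enum import Enum

class Access(Enum):
    BLOCKED = "blocked"            # hidden entirely (viewers under 18)
    INTERSTITIAL = "interstitial"  # shown behind a warning screen
    FULL = "full"                  # shown directly

def gate_video(flagged_graphic: bool, viewer_age: int) -> Access:
    """Illustrative decision rule for the policy described above."""
    if not flagged_graphic:
        return Access.FULL
    if viewer_age < 18:
        return Access.BLOCKED      # minors cannot click through at all
    return Access.INTERSTITIAL     # adults see "Are you sure...?" first

# A 16-year-old requesting a flagged video is blocked outright;
# an adult is shown the warning screen instead.
assert gate_video(True, 16) is Access.BLOCKED
assert gate_video(True, 30) is Access.INTERSTITIAL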

Facebook has come under fire in the past for allowing the posting and sharing of explicit content, often rejecting calls for stricter guidelines by saying its users were in control of what appears on their profiles. Following the initial distribution of the IS videos last summer, however, those calls have grown louder - and not just for Facebook.

Likewise in December, Twitter announced it was "taking steps to simplify" the process of reporting online abuse.

"We're making it easier for users to flag Tweets for review," Shreyas Doshi, who oversees user safety at Twitter, said on his blog. "And to enable faster response times, we've made the first of several behind-the-scenes improvements to the tools and processes that help us review reported Tweets and accounts."

Though the technology to automatically block videos already known to contain graphic images exists, both Facebook and Twitter have been reluctant to deploy it. A Facebook statement said the technology was still unrefined and, for instance, could not differentiate graphic content embedded in a serious news article from content posted and shared to "glorify" violence.

The great majority of content removal "is reported to us by users on our site," said a spokesperson for Twitter, noting one particular exception.

"We use PhotoDNA to identify and report child sexual exploitation images," the statement read, saying the dissemination of violent content was on a different level.

Over-the-shoulder view of a computer screen. Automatic blocks - and international standards - exist for child pornography (Image: picture-alliance/dpa)

A legal matter?

"Once a child has seen something, it can no longer be unseen," Will Gardner, chief executive of the UK charity Childnet International, told DW.

"The user is a like a moderator. He or she tells the platform about abuse or offensive content. This doesn't protect from accidental exposure, which is why we welcome the changes being made at Twitter and Facebook," said Gardner, who is incidentally on Facebook's advisory council.

Advocacy groups are calling ever more loudly for social media platforms to ban violent content outright - but no legal framework for such bans exists.

"For children, the Internet has become unavoidable," said Petra Grimm, a professor of media ethics at Stuttgart Media University. "We need laws that regulate its use when it comes to the dissemination of violent content."

Grimm, the author of a comprehensive study on cyber violence, "Gewalt im Web 2.0" (Violence on the Web 2.0), has called for international legal standards governing the "online dissemination of violent content and violence committed via the Internet," comparable to those already in place for child pornography offenses.

"The Internet has no borders," Grimm concluded. "To combat the dissemination of violent content effectively, we need laws without borders."