US cybersecurity firm Cloudflare on Monday said it would withdraw its services from the online forum 8chan after the main suspect in the deadly El Paso shooting posted a white nationalist manifesto on the controversial website.
"Based on evidence we've seen, it appears that he posted a screed to the site immediately before beginning his terrifying attack on the El Paso Walmart, killing 20 people," Cloudflare CEO Matthew Prince said in a blog post.
Following Cloudflare's announcement, 8chan's administrators had "voluntarily taken it down" while they shifted to new services, according to founder and former administrator Fredrick Brennan, who has disavowed the site.
Cloudflare's Prince cited other cases in which hate-filled content was posted to the online forum in the run-up to politically motivated violence, such as the attacks on mosques in Christchurch and a synagogue in Poway, California.
He said his company had notified 8chan that it would no longer be covered by Cloudflare's services, which include protection from DDoS attacks. "They have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths," Prince said.
'Confronting white supremacy'
The 21-year-old man suspected of killing 20 people at a Walmart in Texas is believed to have posted a patchwork manifesto on 8chan, tying together elements of right-wing extremist conspiracy theories.
The suspect praised the Christchurch shooter in the manifesto and cited "the Hispanic invasion of Texas" as the key motivator behind the attack. The FBI launched an investigation into the shooting as a case of both domestic terrorism and a hate crime.
In the wake of the Poway synagogue shooting in California, another case in which the perpetrator posted content to 8chan, FBI officials told lawmakers at a congressional hearing on "confronting white supremacy" that the Bureau was taking further action to investigate such cases.
"Violent extremists are increasingly using social media for the distribution of propaganda, recruitment, target selection and incitement to violence," the officials said. "These actors tend to be radicalized online and target minorities and soft targets using easily accessible weapons."
But 8chan and Twitter aren't the only online services used to promote ethnonationalist extremism. The US-based Southern Poverty Law Center (SPLC) has identified other platforms, such as Telegram and Gab.
"White nationalists have used fringe platforms like 8chan and the messaging app Telegram to call for terror attacks like the one in El Paso and to praise those who commit mass shootings in the name of their ideology as 'saints,'" the SPLC said in a post on Sunday.
Gab, touted as a right-wing alternative to Twitter, was also used by the perpetrator of the 2018 Pittsburgh synagogue shooting, in which 11 people were killed during a baby-naming ceremony. Web hosting company GoDaddy and PayPal, along with other online companies, barred Gab from using their services in the wake of the attack for "explicitly allowing the perpetuation of hate, violence or discriminatory intolerance."
But some critics believe governments should take a more proactive role in stemming online hate speech and the proliferation of ethnonationalist propaganda, instead of relying on commercial discretion.
Europe 'has taken a lead'
Cloudflare chief executive Prince pointed to European efforts to place responsibility for removing hate speech not only on service providers but also on content hosts, especially platforms optimized for sharing content.
"Europe, for example, has taken a lead in this area," Prince said. "As we've seen governments there attempt to address hate and terror content online, there is recognition that different obligations should be placed on companies that organize and promote content — like Facebook and YouTube — rather than those that are mere conduits for that content."
Germany, in particular, has spearheaded efforts to combat online hate crime and incitement to violence. It has enacted laws that force social media platforms, such as Facebook and Twitter, to remove hate speech within 24 hours or face fines up to €50 million ($56 million).
Earlier this month, German authorities fined Facebook €2 million for underreporting the number of hate speech complaints it received, saying the company had failed to meet its biannual reporting obligations in the fight against online extremism.