India's political parties have been told to eliminate hate speech and misinformation from their social media accounts. However, experts warn that enforcing an ethics code will be difficult. Murali Krishnan reports.
Last week, the Election Commission of India met with representatives of top social media companies, including Facebook, WhatsApp, Google, TikTok and Twitter, to discuss how to prevent abuse of social media platforms during national elections scheduled to take place in stages from April 11 to May 19.
The goal of the two-day brainstorming session was to ensure that political parties and their candidates cannot post unverified advertisements, hate speech, offensive memes and fake news on their accounts.
"We want social media companies to create a code of ethics and not allow their platforms to be misused," India's chief election commissioner, Sunil Arora, told DW, adding that harmful content would be flagged and abuse of social media would be punished.
Indian elections in the past have seen discourse degenerate into slander and widespread misinformation. This has at times led to social unrest and violence. Like elsewhere in the world, social media in India plays a major role in influencing voters.
With over 460 million internet users, India is the world's second largest online market, ranked only behind China. By 2021, there will be an estimated 635.8 million internet users in the country.
Recent studies by mobile companies show that nearly 80 percent of urban internet users, and 92 percent of rural users, use their smartphones to go online.
The messaging service WhatsApp currently has over 200 million monthly active users in India, and Facebook has a similar number. Twitter has nearly 30 million users in the country. Preventing these services from being manipulated to spread misinformation poses a challenge for regulators.
Who decides what is objectionable?
"It is going to be tough. This all should have been thought through a long time ago and not when the elections are upon us," Pratik Sinha, the founder of Altnews, a fact-checking website, told DW.
"The rules of taking down content by the election panel are vague. Who decides what is objectionable?"
Others are skeptical about whether a code of ethics can tame fake news on WhatsApp, considering that messages on the platform are untraceable.
"It takes very little time for a post to go viral before it is taken down. By that time, the post or a doctored video would have been shared, retweeted or commented on, and the damage has been done," digital expert Sunil Prabhakar told DW.
Jency Jacob, managing editor of BOOM, a website that works in partnership with Facebook on verifying stories, said it was unclear how social media platforms would respond to requests from the election commission.
"Are platforms going to take it at face value, or will they push back and say they don't know if something is completely false?" he said. "So many things are unclear."
Facebook and WhatsApp take action
Facebook said that it is planning to open an operations center in Delhi that will monitor election content on its platform. The office will coordinate with the company's other offices in Menlo Park, Dublin and Singapore to keep tabs on election-related fake news.
"With this constant engagement with the election commission, we will understand how we can ensure that the coming polls are safe from abuse and misinformation on the platform," said Shivnath Thukral, Facebook's public policy director for India and South Asia.
Facebook in India has also partnered with several media companies and fact-checkers working across languages for the duration of the election.
However, many social media experts believe that taking down objectionable content or blocking accounts is not going to be easy, as social media has many layers of plausible deniability. Many accounts are not "officially" connected with political parties, yet these accounts still help carry messages.
Given that WhatsApp is a preferred mode for many to spread fake news, the company is testing two new features that could help stop the spread of false information.
It is currently beta testing an in-app browser and a reverse image search tool, which pops up in-chat and lets people search for where an image came from before they consider sharing it.
Over the past several months, WhatsApp has made a series of changes, including labeling forwarded messages to inform users when they have received something that did not originate from their immediate contacts.
"We are proactively working with the election committee and local partners for a safe election and that is our top priority. Expanding our education campaign to help people easily identify and stop malicious messages is another step towards improving the safety of our users," Abhijit Bose, head of WhatsApp India, said in a statement.
Users do the dirty work
Some reports suggest that India's ruling Bharatiya Janata Party (BJP) and its rival, the Congress (INC), have already started campaigns on WhatsApp to rally support and spread political messages among followers.
The BJP is thought to have around 200,000 to 300,000 WhatsApp groups, and the INC has between 80,000 and 100,000 groups.
"The sheer number of online users has transformed political campaigning in India, with political parties and their affiliates of all stripes generating vast amounts of fake news. The code of ethics has come late," said Jignesh Tiwari, a social media analyst.