The US nonprofit Ranking Digital Rights looks at tech companies’ disclosure of policies and practices affecting freedom of expression and privacy. Director Rebecca MacKinnon talks about these companies’ responsibility.
The latest report by Ranking Digital Rights, its 2018 Corporate Accountability Index, evaluated 22 of the world’s most powerful Internet, mobile and telecommunications companies on the amount of information they disclose on their commitments and policies affecting freedom of expression, security and privacy.
While the majority of the companies improved their scores compared to 2017, major shortcomings remain with regard to expression, security, privacy and governance, according to Ranking Digital Rights.
Rebecca MacKinnon, director of Ranking Digital Rights, has long been active in the fields of freedom of expression and privacy and is a founding member of the Global Voices Online citizen media network.
#mediadev: Rebecca MacKinnon, the Ranking Digital Rights report was published in April. The methodology hasn’t changed since 2017, so comparisons between the two years are possible. What kind of results do you see from the project?
MacKinnon: We have definitely seen companies begin to change. Just in the past year, between 2017 and 2018, 17 of the 22 companies we evaluated have made changes. And many of them have been in direct response to the evaluation we made last year. Other changes have been indirectly related to our evaluations. For example, the first year we ranked the companies, we had an indicator that has been constant throughout the ranking – looking at whether companies publish any data about the volume and subject matter of content they remove when they enforce their terms of service.
In the first year, 2015, it was zero for all. Companies across the board were revealing no information about the amount of content being removed. This year we saw some transparency, some disclosure of numbers in our index. And just last week, after we finished the research and the scoring, YouTube published very comprehensive data about the volume and nature of content being taken off their video platform. So I imagine that will boost their score even more in the next round.
So just by publishing certain information, companies boosted their score. Is there any other low-hanging fruit companies could go for right away in order to improve their ranking?
There’s no reason for companies not to be clearer about how they handle user data in many cases. Just tell us what you’re doing! The reason they’re not so clear is that people won’t be pleased with what they learn, and that might not be so good for business. But then I think that really raises a question: if you’re doing things you can’t tell your users about, maybe you need to fix your business. That’s an area that is a little more difficult to disclose, yet at the same time public interest in that disclosure is very high. There are other things. In some markets, companies don’t publish their privacy policies at all. Or you can’t see them unless you are a paying subscriber. That’s very mystifying. Why don’t they just publish those policies? That would certainly be a very low-hanging fruit.
Where do you see other positive developments?
We’ve also seen Facebook make changes that map quite directly to disclosures we want to see about use of data and about terms-of-service enforcement. Some of those changes are very positive. We have also seen some European telecommunications companies make significant changes to their governance policies around expression and privacy. They’re doing much more due diligence and impact assessment and demonstrating that they’re holding their executives accountable and starting to be more transparent about government demands to remove content, shutdown networks and so on.
So we are seeing some changes, definitely. But still, most companies, if you grade them in academic terms, get a failing grade. That failing grade is improving somewhat, though. I would also say that if you look at indexes on press freedom, Internet freedom or freedom of speech around the world, all these indexes are going down. And governments don’t seem to care very much about what evaluation they receive. They are not directly making changes. Companies are much more responsive and much more concerned about a loss of their reputation. So, ironically, companies are actually making efforts to improve that are visible more quickly. Not to say that companies are doing great, but they are more agile than governments.
Not only are press freedom indexes declining; relatedly, civil society spaces are shrinking in many regions of the world. Where does the specific responsibility of tech companies lie?
This is where tech companies really do have a responsibility, because increasingly they are the conduit, the intermediaries between governments and people who use the Internet – which is increasingly most people in many parts of the world. Censorship and surveillance happen through the companies. One problem is that companies can’t stop governments from engaging in bad practices. According to human rights law and standards, governments have a primary duty to protect human rights. And they are failing around the world.
Companies also have a responsibility to respect human rights. They need to demonstrate maximum effort to respect the rights of their users in any area they have control over. For example, if a law requires a company to hand over user information and a ministry or some public security bureau asks for it, a company operating in that country can’t break the law without its employees going to jail. But it can be transparent with its users. It can carry out due diligence about whether it should even be in that country. What is the human rights impact, positive and negative?
Ultimately, unless governments live up to their responsibilities on human rights – which they are failing to do – it’s going to be very difficult to reverse the global trend on the strength of corporate responsibility alone. However, there is much that companies can do to mitigate harm, to limit the impact of poor law, bad law and pernicious law. Of course, there are other practices that companies engage in that need to be regulated more, like Facebook’s very lax data practices which need to be regulated with better data protection law. So there are some ways that companies are also abusing users’ rights directly that governments can take a stand on. It’s a very complicated situation.
Do you expect the current European data protection legislation, the GDPR, to have an effect beyond the region?
Any platform that has global users is making changes in response to the GDPR. Our own indicators, our own evaluation, do not directly link to the GDPR, but many of the disclosures we are looking for are very similar to what the GDPR is asking. We are optimistic that we’ll see improvement as a result. I’ll be interested in seeing just how much. It’s also unclear how exactly the GDPR is going to be enforced and which parts will be enforced more than others. The ultimate impact of the GDPR is going to take a few years to play out. But I do imagine in the coming 12 months we’ll see quite a bit of change. Hopefully that will also incentivize business models that are more respectful of users’ rights.
Security is one of the major pillars of the report – an area where investors have been more sensitive than in others. What is the most striking finding in the report when it comes to security?
What is very interesting with security is that, for example with data breaches, very few companies disclose to users any information about their policy on handling data breaches. Now, they talk to regulators about this, obviously, because laws in many countries, including across Europe, require communication with regulators about data breaches. But there are not a lot of requirements to communicate with the public or with users, or the requirements are mixed. So companies are not actually disclosing their overall policies on handling data breaches, which is somewhat surprising.
Interestingly, on some of the security indicators, the Russian company Yandex does quite well, although it doesn’t do so well in many other parts of the index. But on security it clearly wants to make a name for itself. And there is no legal barrier to being strong on security. We are also seeing with Chinese companies that they want to compete with each other on demonstrating security for their users.
Straying a bit from the current index, there has been a lot of talk recently about algorithmic decision-making. What are the human rights-related risks regarding a lack of algorithmic transparency?
The ultimate issue here is that when our information environment is manipulated in any way – whether that manipulation is being done by humans or by computer programs or by robots – I need to know that, and I need to know who is responsible for it. Because if my information environment is being manipulated and I don’t know it, that’s insidious. How can you have a democratic society in which the government is held accountable if information is being manipulated and no one even knows? When abuses take place around the manipulation or policing of information and people don’t know who did it or whom to hold accountable, how can it be changed? If you can’t have a public discussion about that, you don’t have freedom of expression.
So it’s very important that we find a way to determine what specifically we want companies to disclose about their use of algorithms so people can understand that their information environment is being manipulated. But it’s not so easy to figure out exactly what the questions are we need to ask and that we should grade companies on.
Overall, how is the tech sector doing when it comes to human rights compared to other sectors?
You know, all sectors have unique human rights problems. But there are some sectors that have been dealing with human rights problems for a long time. And of course they’ve made a lot of really big mistakes. But what’s funny is that if you talk to the leading companies of, for instance, the mining industry or the manufacturing industry or the food and beverage industry, when you talk to Unilever or even Coca Cola, they have human rights policies. They all conduct risk assessments across various sectors.
And then you go to Silicon Valley. Google, Microsoft and some others have signed on to initiatives like the Global Network Initiative that reference international human rights standards. But if you get beyond this handful of companies, if you go to the startups, to gatherings or technology conferences on digital marketing and advertising, and you talk about the Universal Declaration of Human Rights, people don’t know what you’re going on about.
Companies in the manufacturing sector tend to be a little more conversant with these things because they’ve been forced to be. But what’s funny is the extent to which many technology companies think they don’t have anything to learn from other sectors. And this is too bad, because if you don’t learn from people’s history, you might repeat that history unnecessarily.
Interview: Alexander Matschke