Big data is being used in political campaigning like never before. But for better or for worse? Mathematician Cathy O'Neil spoke to DW about how algorithms can be used as weapons that undermine democracy.
DW: You always loved math, and you built your career on it. But in your book "Weapons of Math Destruction," you argue that math, specifically big data, can be a weapon. How did you get to the point where you said, "I need to write this book?"
Cathy O'Neil: I do love math, and I thought that math was being abused. I wanted to write the book to, first of all, make the call so that people would know this is abuse and, second of all, defend the honor of mathematics itself.
Is big data something new?
It is not new to use data to predict things, but data used to be collected carefully and directly by statisticians, pollsters or employers. They would poll people on their opinions on various specific matters. Nowadays, we don't have direct data. We are recycling data and using proxy data - things like how you click on websites, what you purchase, what you say on Twitter, who your friends are on Facebook - to infer things that we are interested in. The promise of big data is that we will be able to use all this proxy information to determine with increasing accuracy the things that we care about.
The German edition of O'Neil's book from Hanser publishing house. The title literally means "attack of the algorithms"
Big data supporters argue that it can help improve systems and make things more accurate - or say that the more information you have, the more beneficial outcomes you can achieve. But you seem to take the counterargument.
I would like to be clear that I am not saying that these things [algorithms] are totally inaccurate. Sometimes they are accurate, and that is a problem too! What I am trying to say is that these things are powerful tools, and they are blindly accepted by most people as fair. I want to make a very clear distinction between accurate and fair. That is where people get confused. Once they hear that an algorithm is accurate, they stop asking questions.
I worry about algorithms that are pretty accurate, but are still destructive and secret. For example, those used in political microtargeting. Politicians and campaigns in the US are getting better at figuring out who thinks what. They end up knowing more about the voters than the voters know about themselves. Moreover, the messages that the campaigns target these voters with are becoming less informative and more propagandist. And that, despite its accuracy, is still a destructive model because it undermines democracy.
You brought up microtargeting: using big data to target specific voters or voter groups with tailored messages. In your book, you compare this to a supermarket targeting its customers. Can you explain how that works?
I think we should understand that political campaigns put human beings into marketing silos. They gather groups of 30 to 40 people together, determine the marketing silos as sold to them by the data warehouses or gleaned from these individuals' Facebook likes or such, and then ask them about their political opinions. And what they have found is that for a given marketing silo, people's political opinions are relatively stable.
That means they don't have to know people's political opinions. They just need to know their silo. That is exactly how companies like supermarkets or Amazon or Walmart also think about their customers. They bucket them into different consumer types.
O'Neil also explains how algorithms work in health care, teacher hiring and firing, and job evaluations, among other things
Political campaigns have always had to sell themselves to voters. What makes big data's current use so dangerous that it threatens democracy?
I think there are two big differences today: scale and opacity. It used to be true that a politician could tell different things to different voters, but journalists would check whether the politician in question was saying different things to different people and write about it if they found conflicting political promises. That is impossible now because the different messages are going to Facebook, and a given journalist only has his or her own profile. They don't get to see the kinds of ads that other people are seeing who have different profiles.
To be able to send out highly targeted ads, you must have personal data. In the US, this is available and can be used. But Germany has much stronger data protection laws - it is illegal to collect, process, and analyze personal data without stating how it is going to be used, who will use it, and then obtaining explicit consent. Does this provide some protection or hinder such microtargeting from taking place in German political campaigns?
I would say yes and no. Yes, in that what we have had in the US is much more extreme than what I think is happening in Germany. For example, we have email lists of individuals who have been targeted as likely Democratic voters and donors. These lists are passed around, and they are very, very valuable. Moreover, Facebook will allow advertisers to advertise to specific email addresses.
Germany's stronger data protection laws can't fully prevent microtargeting; however, the digital strategy may not be used as aggressively as in the 2016 US presidential campaign
I don't think that kind of thing can happen in Germany. However, I do think German advertisers are allowed to target by interest and, if so, that means they can do exactly what I described before: host focus groups to figure out which interests are correlated to which political beliefs.
They are not going to send their political ads out in email lists or even to a known list of people, but to people with specific interests that they have surmised are correlated to a certain type of political belief. There is nothing personal about this. The generic connection between marketing silos and political beliefs will be true in Germany as it is in the US. The question is how well-honed this understanding for German consumers is.
Big data's usage is not coming to an end any time soon. What consequences do you foresee if it keeps being used as a weapon?
I foresee more propaganda that directly challenges the concept of an informed citizenry that votes after having thoughtfully considered all options. There is no thoughtfulness when you don't get real information, and there is no real information in most of these ads. I don't see any way for that to end, unless we really start changing the rules.
How can we change the rules to improve the situation?
I think a step in the right direction would be to pressure Facebook to have a place where journalists or any interested citizen could go to see all the political ads that are being shown on the platform - in fact, all ads, because some of the political ads don't look intrinsically political. That would go a long way.
Do you think we can step back from the position we have already arrived at - this social and political schism?
I think we have to. I think the consequences are becoming more and more obvious. I think it is an existential threat in a certain sense.
An existential threat to whom?
To the concept of evidence, to the concept of a shared reality.
Cathy O'Neil is the author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." The book is released in German ("Angriff der Algorithmen") on August 21 by the publisher Hanser. O'Neil recently founded a company that offers clients an algorithmic auditing service and would like to work towards understanding what accountability with algorithms could look like. O'Neil blogs at mathbabe.org.