Where Facebook lets fake accounts fly
In September 2020, Sophie Zhang was fired from her job as a data scientist at Facebook, where she had worked for almost three years on the “fake engagement team” set up to track abuses on the platform. In her farewell memo, which was obtained by BuzzFeed News, Zhang detailed how she had detected extensive networks of inauthentic activity on the social network, and how her superiors had declined to intervene. In Honduras, Azerbaijan, India, Ukraine, Bolivia, Ecuador, Spain, and more, she detected coordinated campaigns to support political candidates and manipulate public information, and found herself single-handedly intervening, she said, in the national politics of these countries. In particular, she pointed to a loophole in Facebook’s current policies. Although the company bans users from setting up multiple accounts, the same rules do not apply to Facebook Pages, which can be used to boost political candidates and can like and comment on posts just as user accounts do. “I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” she wrote.
In early April, Zhang came forward in an extensive interview with The Guardian in which she described how she petitioned her superiors to take action and was repeatedly told that certain nations, like Honduras and Azerbaijan, were not high priority, or that the company simply did not have enough resources to address the scale of the problem. Zhang recently spoke with The Ballot about her experience at Facebook, how inauthenticity is detected, and how this global problem might be remedied. Facebook did not respond to a request for comment.
~~~
The Ballot: Some of the worst violations you observed occurred on Facebook networks in Honduras and Azerbaijan. During your Reddit AMA last week, you wrote that, “[I]f you stuck a gun to my head and made me pick, I'd say Azerbaijan was more blatant.” Why is that?
Sophie Zhang: Azerbaijan was larger and more sophisticated, and had a lot more actual, real effort involved, a lot more detail. This was not an easy operation: This is maybe thousands, tens of thousands of people motivated by the government in some form to go to work every day [to post things on Facebook]. People who get up at nine in the morning, stop for a regular lunch break at 1 pm, or perhaps a tea break. They would only work on weekdays; they stopped during weekends and during holidays, such as New Year's. They also stopped when non-essential government employees were temporarily furloughed due to Covid-19 last year.
Every time an opposition figure in Azerbaijan posted something on Facebook, they would immediately get deluged by hundreds or thousands of comments about why they were a terrible person, why they were a traitor, why the president of Azerbaijan, Ilham Aliyev, is great and glorious and will lead Azerbaijan to prosperity, and why Azerbaijan doesn't need Western democracy. I'm paraphrasing from memory, of course. You can still go and find them, I'm sure. That's what I meant [when I said] it was the worst. There was clearly so much actual effort being expended for this specific purpose, and they hadn't even bothered to hide [it]. Official accounts for the YAP [New Azerbaijan Party], which is the ruling political party in Azerbaijan, were controlling some of this activity.
The similarities between Azerbaijan and Honduras were the direct governmental involvement and the specific methods used to carry out the activity: fake Pages pretending to be real people.
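Zhang's schedule observation suggests one concrete detection signal: operators who are paid to post keep office hours, complete with lunch breaks, weekends, and holidays, while organic users do not. A minimal sketch of that idea in Python, with column names and thresholds invented for illustration rather than taken from Facebook's actual tooling, might look like this:

```python
# Hypothetical sketch: flag accounts whose posting tracks an office schedule.
# Column names and thresholds are invented for illustration.
import pandas as pd

def office_hours_fraction(events: pd.DataFrame) -> pd.Series:
    """Fraction of each account's actions on a weekday between 9am and 6pm.

    `events` needs an `account_id` column and a timezone-aware `timestamp`
    column already converted to the suspected operators' local time.
    """
    ts = events["timestamp"]
    on_shift = (ts.dt.dayofweek < 5) & ts.dt.hour.between(9, 17)
    return on_shift.groupby(events["account_id"]).mean()

def flag_schedule_suspects(events: pd.DataFrame,
                           min_actions: int = 200,
                           min_fraction: float = 0.95) -> pd.Index:
    """High-volume accounts with an implausibly clean work schedule."""
    counts = events.groupby("account_id").size()
    fractions = office_hours_fraction(events)
    return fractions[(counts >= min_actions) & (fractions >= min_fraction)].index
```

A single account keeping office hours proves nothing, of course; the tell Zhang describes is many accounts sharing the same shift pattern, lunch break, and holiday calendar, so a filter like this could only be a first pass ahead of human review.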
TB: The loophole you have described is the use of Facebook Pages as fake accounts, correct? Are you aware of whether this loophole is still open, whether people can still use the same tactics to create inauthentic accounts?
SZ: As far as I'm aware, it's still open. Just the other day, I checked to see that it was still ongoing in Azerbaijan and in certain other countries. To be clear, Facebook does have official rules against this, but those rules are not enforced right now. The analogy I would give is that many nations have laws against jaywalking, but that doesn't mean that the police will go out and arrest you if they see you in the street and not at a crosswalk. It's officially forbidden under Facebook's rules, but no one is going out and finding the people who are doing this. So that's the specific loophole that's still open. It was used in Honduras, it was used in Azerbaijan, it was used in a large number of other countries throughout the world, and as far as I can tell, it's still going on.
TB: You have spoken out about how Facebook makes decisions about which countries to prioritize [in their fight against fake activity]. In your Guardian interview, you said there was an explicit focus on the US and Western Europe, that the operating assumption was that “if a case does not receive media attention, it poses no societal risk,” and that this led the company to ignore violations in smaller, non-Western countries. When, if ever, did this change?
SZ: So to be clear, Facebook does prioritize other countries, such as India, which is a target growth market for Facebook, of course. This is a complicated question, and I'm going to do my best to break it down into parts to answer one by one.
Ultimately, Facebook is a company. Its goal is not to fix democracy or to make the world a better place. Its goal is to make money....
Facebook will tell you that they do act in smaller countries, in nations that aren't in the West or India, etc. But the issue is that most of those actions come in reaction to outside reports. Secondly, Facebook does have an official country priority list. There used to be a prioritization based on upcoming elections. For instance, Brazil and the U.S. were considered highest priority in 2018, and at the start of 2019, India and the European Parliament were considered the highest priority for the first half of that year. Of course, the issue with this prioritization is that not all countries have elections or are democratic. I mean, Azerbaijan is so democratic that in 2013, it accidentally released partial election results the day before the actual election.
In 2019, Facebook set up what's called an at-risk country prioritization list. Certain countries are more important than others, judging by a combination of size, scale, stability, what's going on there, etc. For instance, India is highest priority on that list and I think the United States is, too. But it's not perfect, because Azerbaijan was at the second priority level on that list, and it still didn't get acted upon. So sometimes it seemed like more of a suggestion than a rule.
How do I see PR considerations and country prioritizations coming into it? I didn't see it directly. It was usually the elephant in the room that people did not like to talk about, and it was often indirect. The cases where it was explicitly mentioned were rare....
People would almost never come out directly and say, 'India is more important than Azerbaijan, sorry Azerbaijan.' And even in cases where countries are given priority, often the priority is not what I think the people in those nations would prefer. You might have seen the report The Guardian put out on my work in India. What happened there was that when I found networks of fake accounts supporting Indian politicians from multiple political parties across the political spectrum, Facebook did prioritize them. After I filed the task and escalated it internally, they acted within a few weeks. However, in the case of one network, I discovered at the last minute that it [the inauthentic activity] was actually tied to a politician, a sitting member of the Indian parliament. As soon as that happened, even though we had already gotten permission to act, there was suddenly radio silence. I asked repeatedly, often when we were talking about other matters, so I knew they were paying attention. But it was like talking to a wall. I never got a response.
TB: I appreciate the distinctions you have drawn between inauthentic activity versus misinformation. In Ukraine, you identified networks of “inauthentic scripted activity” supporting two former prime ministers. How does that kind of activity differ from other kinds of inauthentic activity?
SZ: The word 'bot' is often used interchangeably these days to refer both to scripted accounts with no real people behind them (accounts that just have a script running on them) and to real people who are sitting in a room and acting, perhaps, on behalf of a government. These are very different categories. Robots and scripts are not as smart as actual people; that is why you still have a job. It's why I had a job, though I don't anymore.
It's easy to get large numbers with scripts, but it's usually not very effective. In comparison, using real people can get you much more effective results, but it takes a lot more effort, because you're paying real people to do something for you and people's time is valuable.
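One crude way to see the difference Zhang draws: scripts buy volume cheaply, and cheap volume leaves fingerprints, such as the same comment text repeated over and over. The toy heuristic below, my own sketch with made-up field names rather than a description of Facebook's detectors, would catch a naive script but not a room full of paid humans:

```python
# Toy heuristic: scripted accounts repeat themselves; paid humans vary their text.
# Field names are made up for illustration.
from collections import Counter

def looks_scripted(comments: list[dict], dup_ratio: float = 0.5) -> bool:
    """Guess whether one account's comments came from a script.

    `comments` is a list of {"text": str} dicts for a single account.
    """
    if len(comments) < 20:
        return False  # too little data to judge either way
    texts = Counter(c["text"].strip().lower() for c in comments)
    most_repeated = texts.most_common(1)[0][1]
    # A script pasting the same slogan trips this; a human rephrasing does not.
    return most_repeated / len(comments) >= dup_ratio
```

Which is Zhang's point exactly: a human-run operation defeats a check like this almost for free, and that realism is what the extra money and effort buy.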
TB: You mentioned in your AMA that in India, for instance, there were times when accounts were labeled as inauthentic, when really they belonged to new internet users who hadn't changed the default settings on their account.
SZ: That's an example of what we could call a false positive. People have ideas of things they could look for to find inauthentic accounts, but you're not always going to be right. Some accounts can look very suspicious: maybe an account with a blank photo, a generic name, and no email address, one that doesn't do anything on Facebook except friend completely random people. This may look very unusual, but it may turn out to be just a rural person who doesn't know anything about the internet and behaves oddly. I don't want to go into the details... But it's true that unfortunately mistakes happen in this sort of work.
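Her example maps onto the classic precision-recall trade-off: any rule simple enough to run at scale will also match some legitimate users. A deliberately naive sketch, with scoring rules invented purely for illustration, shows how a brand-new user on default settings can score exactly like a fake:

```python
# Deliberately naive scoring with invented rules, to show how false
# positives arise: each "suspicious" trait adds a point.
from dataclasses import dataclass

@dataclass
class Account:
    has_profile_photo: bool
    has_email: bool
    friend_requests_sent: int
    posts_made: int

def naive_fake_score(account: Account) -> int:
    score = 0
    score += not account.has_profile_photo      # default settings untouched
    score += not account.has_email              # signed up by phone only
    score += account.friend_requests_sent > 50  # friending strangers
    score += account.posts_made == 0            # never posts anything
    return score

# A new rural user who never changed the defaults scores 4 out of 4,
# identically to a freshly created fake account: a false positive.
new_user = Account(has_profile_photo=False, has_email=False,
                   friend_requests_sent=60, posts_made=0)
assert naive_fake_score(new_user) == 4
```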
TB: In the AMA, you referred to inauthenticity as something “you can be very Manichean” about, describing yourself as coming into a conversation and saying, “I know what is bad. This is bad! Here! Let's get rid of it,” such that others could not dispute it. But was identifying inauthenticity always so black-and-white? How did you determine what met the threshold, and what did not?
SZ: I want to make a distinction between two different types of clarity. The first is the question, 'Is something bad or not?' That's the clarity I was referring to. Misinformation is a controversial topic. There are smart and influential people who are saying, 'Well, people should have the right to express their views, even if they're wrong. We have freedom of speech, etc.' But no one—at least no one prominent that I am familiar with—is saying that dictatorships should be able to run tens of thousands of fake accounts pretending to be real citizens on social media. I don't think anyone would defend that. That's what I mean by black-and-white.
Were there cases in which sometimes it was not clear if something was real or fake? Certainly. Because of that, I'm focusing on cases where it was very clear, and beyond that, I'm trying to focus on cases in which it was very clear who was responsible. But there were certainly cases in which it was less clear whether it was real or fake. In cases where there was less clarity, it was harder to act.
TB: After an election took place, did Facebook just move on to the next one? Was there any kind of evaluation after the fact?
SZ: The election focus was the old way of doing things. I joined Facebook in January 2018; that was the way things worked when I joined the company. It changed in the middle of 2019 to focus on specific countries that were considered to be at risk regardless of elections. So for instance, Myanmar does not have an upcoming election, but it is considered a priority for extraordinarily obvious reasons.
TB: Was there any interaction at all between the company and people on the ground who were actually involved in safeguarding elections?
SZ: If you are referring to international observers, governmental agencies, NGOs, activists, that sort of thing, yes. I think the company took advice and tips from them often. That's where a good chunk of leads to investigate came from.
TB: What kind of policy solutions, if any, do you see that might remedy these kinds of speech and authenticity problems?
SZ: I think it's clear that some collective solution is necessary, that the United States, India, the United Kingdom, France, and Australia shouldn't each be independently making their own decisions. If nothing else, because when countries act independently, it's easier for an individual social media company to defy them. That's what happened in Australia quite recently [where Facebook banned access to news pages after the government proposed legislation that would regulate the company]. Some level of collective action is needed, but I don't think the United Nations is the correct pathway.
For the specific area of inauthenticity—and this is just an idea, of course; I am not a policy expert, I am not a politician, and I don't know what good policy solutions would look like—my suggestion is that countries such as the United States, France, etc. could have their own social media experts set up test inauthenticity operations on social media platforms, with the knowledge of those companies but without sharing specific details, and see how many of those operations get caught.
This would be a way of independently testing how good each social media company is, and incentivizing them to be better at catching [inauthentic networks]. A lot of the problem right now is that people only know what the social media companies tell them. If you talk to Facebook after this call, they'll give you a blanket statement about how they do so many takedowns and do a lot of work, and it may be completely true, but it will not answer any of the specific points.
This interview was conducted by Linda Kinstler. It has been edited and condensed.