Don't Trust Your Eyes, Don't Trust Your Ears
One challenge for voters and politicians is deepfakes: altered video in which a leader or politician appears to say things he or she never said.
While deepfakes are not yet known to have directly influenced an election, the mere existence of the technology has already cast presidential statements into doubt. Two years ago in the Central African country of Gabon, suspicions about a video of the president fueled a failed military coup.
In the Indian legislative elections, a video from Prime Minister Narendra Modi's BJP party was manipulated to show the politician speaking in a Hindi dialect rather than English, a way of "approaching the target audience," according to a party representative.
To learn more about this technology and its possible implications, we spoke to Julie Owono, the executive director of Paris-based digital rights organization Internet Sans Frontières.
This interview has been condensed and edited.
Two years ago, the President of Gabon appeared in a video that was thought to be a deepfake. Can you explain what happened?
In 2018, Ali Bongo, the president of Gabon, suffered a stroke. But this information was hidden.
The public knew only that he had fallen ill. Nobody knew whether he was alive or whether he had recovered. There was a lot of secrecy; the government didn't want to discuss the matter.
This was very important because if the president were incapacitated, procedures needed to be put in place to determine who held power: whether the president of the National Assembly needed to take over and elections needed to be organized.
But instead of providing health records, the government did nothing.
In December 2018, Bongo reappeared in a public address. But the public address did not convince people that it was truly him.
A week later, there was an attempted coup, led by members of the presidential guard, people who were familiar with the president. During the coup attempt, they claimed they acted because they did not recognize the person they saw on TV as the president, and therefore did not trust what was happening.
The coup failed. The soldiers who plotted it were arrested and then killed.
What happened next?
[Many] continued to ask questions about the video: for example, why does the president never blink? Why does his mouth seem so detached from the rest of his face?
We received many of these questions, because we have been campaigning a lot in the region.
We brought those doubts to Facebook—the video had aired on Facebook. We sent the concerns to the platform, telling them that we did not know with 100 percent certainty whether this was fake, but that given the implications, it would be important for the platform to be aware and to decide how such information [should be broadcast].
Experts now say it is not possible to confirm that the video is a deepfake, but they do not rule out the possibility that it may be one.
But our position is that the very fact that there was a suspicion should have triggered a response from the platform, which didn't happen. To this day, we don't have any official reaction from Facebook.
Why is this kind of reaction important?
It would have been important to get a reaction because that video did have a political impact.
We conducted an investigation last year showing that every critic, every dissenting voice that tried to question the president's health, whether he was really the person shown in the video, and whether he was fit to rule, was either suspended, in the case of media outlets, or arrested.
For those who don’t know: what is a deepfake?
A deepfake is either a video or a picture—but usually a video—which has been doctored so that someone appears to say something that is not in the original footage.
The most famous example is a video of Barack Obama doctored to make him appear to call Trump an idiot.
In the case of the Gabonese president, we considered it a potential deepfake because it seemed possible that the president was speaking, but not in the way presented—he was not stuttering, for example, even though people who have suffered strokes often retain lingering effects of the attack.
You have said in the past that deepfakes are more common in African politics than they have been in American politics. Can you describe a little how they tend to function: who tends to be making them, and how do they circulate?
We're seeing a diversity of uses of deepfakes. On the one hand they are used by regimes to further advance accounts that may not be true, but at the same time, they can also be used by activists who want to put out a message.
In Cameroon, for example, activists published deepfake footage of the French ambassador to Cameroon saying things like "Cameroon belongs to France."
It is obviously a fake, and even the activists sharing it did so saying, “This is a deepfake, France finally recognizes that Cameroon is a colony.”
So for us it was an interesting example because we have been advocating for closer attention to the impacts of deepfakes on the governance of the country. At the same time we are questioning the extent to which they can actually be used in the frame of freedom of expression—in this case as satire to reveal a very problematic colonial relationship between countries.
Whose responsibility are these fakes? The people circulating them? Or platforms like Facebook?
Increasingly, whether Facebook likes it or not and whether Zuckerberg likes it or not, platforms have responsibility.
Especially in the Global South: you cannot scale your product into a market that is obviously profitable, probably the most profitable in the years to come, without accepting the political responsibility that comes with that.
We don’t think this should be done arbitrarily. But there are international human rights texts that do provide very good guidance.
For instance, in the case of the debate that we're having internally at the moment [about deepfakes used by activists]: What do international texts say about satire? They protect the right to caricature, especially towards public figures.
So, we think that by looking to international guidance we can have an idea of how these types of content could be treated by platforms, so they don't have to decide based on internal rules that the general public can neither see nor influence.
Hopefully, when it comes to deepfakes, there will be some transparency, because our fear is that, as with anything else, [practices put in place] in the name of fighting disinformation will end up silencing genuine, authentic, and important voices, especially in places where the debate is not open.
Yes, one of the things we've seen in the wake of conversations about fake news is that countries around the world have passed legislation against it. These laws end up targeting journalists or opposition members and, in fact, putting a damper on free speech.
This has been a trend.
That binary approach to disinformation—"fake news" versus not fake news—is very disturbing for us, because we know it's limited, and it can overshadow the good things that could have come with [more scrutiny].
How might organizations approach disinformation and deepfakes?
One thing that we saw in the case of Gabon is that it is important for both platforms and information networks to work with activists located in those countries.
This is especially important in cases where activists suspect that the government has published a deepfake but cannot prove it because they lack the necessary capability. It's important for researchers in Northern universities to work with and listen to what activists are saying.
There should also be more training on how to identify deepfakes, but again, collaboration is important, to bring data and science to what activists are saying. It protects them.
This is something that we’ve done in another area: the issue of internet shutdowns.
We will take the risk of saying we suspect there's a shutdown. We have tools that can look at the state of the network globally and see whether or not there is a problem in a country. Having this evidence prevents [activists] from being accused of publishing false information.
So more science, and more sharing.
Julie Owono is a lawyer and the executive director of Paris-based digital rights organization Internet Sans Frontières.