Fact-checkers are level-headed people in my experience. They see claims circulating online — or in parliament — and check them, clarifying the confusing ones and refuting the lies. They are not prone to moral panics or conspiracy theories. But some of my favourite fact-checkers are starting to warn that the next round of elections in western democracies will be under attack on many fronts — and they are getting little reassurance that governments are thinking seriously about the risk.
That risk comes in three parts. First, democratic elections can have big consequences, and narrow margins matter. The world would look quite different if Hillary Clinton had defeated Donald Trump in 2016, if Trump had defeated Joe Biden in 2020, or if the UK had voted to remain in the EU in 2016. With a modest swing in the vote, any of these outcomes could have happened.
Second, the small number of swing voters who are usually decisive in elections often make up their minds whether and how to vote in the final few days of the campaign. Late surprises can make all the difference.
Third, it is cheap and easy to launch a disinformation attack. Given the two points above, if you were a bad actor — a foreign government, an extremist group, a billionaire hoping to gain influence — then why not give it a try?
I spoke to Will Moy, outgoing chief executive of Full Fact, a UK‑based fact-checking organisation, and to Andrew Dudfield, who is Moy’s interim replacement and Full Fact’s head of artificial intelligence. They painted an unsettling picture of the possibilities.
What if, for example, there is a co-ordinated release of fake and inflammatory images and stories? A few weeks ago, fairly crude fake images of a non-existent explosion at the Pentagon sent a brief shudder through stock markets. The faked images were amplified by a Twitter account with a blue checkmark that appeared to be an official Bloomberg News account — but wasn’t — and by the Twitter account of the Russian state media outlet, Russia Today (it later deleted the tweet). It is not hard to imagine a more sophisticated piece of disinformation being unleashed just as a finely poised electorate goes to the polls, and proving decisive.
The event itself need not be faked. Perhaps a police officer is murdered, or a public building catches fire, and the disinformation attack is to falsely accuse a particular group of responsibility. Another possibility is the last-minute release of confidential information; even true information can be highly misleading if released in a selective way.
A third line of attack spreads disinformation about the electoral process itself — for example, alleging electoral fraud, or trying to suppress turnout by spreading lies about the process for voting, the location or security of voting booths, and even the date of the election. The Latin-American fact-checking organisation Chequeado has seen so many examples of this that it has published a top 10.
All of this has happened before, so it would hardly be a shock if it happened again. But we may not have fully adjusted to the fact that powerful tools for disinformation are now much more widely available. Lies can come from foreign governments, from influencers looking for clicks and advertising revenue, or from bored teenagers. Lies can also be targeted over social media, whispering to voters in quiet corners of the internet, unnoticed by conventional journalists, fact-checkers and commentators.
A new study by Ben Tappin, Chloe Wittenberg and others suggests that, at least for some topics, some fairly basic targeting of a particular type of message to a particular type of person makes that message substantially more persuasive. There is nothing wrong with such targeting — unless these targeted messages are flying under the radar of basic fact-checking scrutiny.
These are some of the obvious possibilities; there are, presumably, other lines of attack that we have not yet imagined. So how should we respond to these risks, while remaining an open society? It is important not to overreact: spreading unfounded cynicism about the electoral process is self-defeating, since one aim of bad actors is simply to undermine our confidence in our own elections.
One possibility is to take a leaf out of Canada’s book. Canada has a “Critical Election Incident Public Protocol” that appoints an independent panel of public servants to decide whether the integrity of an election is under threat, and if so what to do about it. It is a fairly light-touch approach to the problem, but that may well be wise.
Full Fact also suggests that disinformation needs the same kind of framework as severe weather, terrorist threats and so on: we should adopt a scale of one to five describing “information incidents” in a way that specialists can convey clearly to the rest of us just how serious a particular problem really is.
The alternative is simply to hope that nothing bad will happen, and that if something does, the government of the day will act appropriately while also seeking re-election. The potential for conflict of interest is painfully obvious. Equally obvious is that politicians running for office cannot be trusted to take impartial and appropriate action in a competition they are trying to win.
“We don’t know what the next election will look like and neither does anyone else,” says Moy. But our current information ecosystem is fragile, and there are many who would be delighted to exploit that fragility — both inside the political establishment and well beyond it. Our unblemished record of being caught unprepared by everything from war to financial crisis to pandemic is remarkable. But at the risk of spoiling all the fun, it might be worth thinking this one through in advance.
Written for and first published in the Financial Times on 30 June 2023.
My first children’s book, The Truth Detective, is now available (not yet in the US or Canada, sorry).