Given that it is the purest bullshit, the “Blue Monday” meme is showing surprising longevity. While the US this week celebrated Martin Luther King Jr day, the British were reading about what purports to be the most depressing day of the year.
The fantasy that the third Monday of January is Blue Monday was dreamt up by Sky Travel, a holiday company that no longer exists. It is based on an equation linking weather, debt and other factors that is transparently absurd, and was given the faintest air of academic rigour by being endorsed by a psychologist with the title “Dr”. This scheme to sell more package holidays was launched in 2005 — 14 years ago, for goodness’ sake — and is still being used to sell package holidays.
It endures despite some blistering reporting from Ben Goldacre, a psychiatrist, writer and researcher in evidence-based medicine. A single internet search, or a moment’s glance at Wikipedia, should be enough to give anyone pause before citing Blue Monday. Yet we continue; it seems we can’t help ourselves.
Blue Monday is particularly popular, but it is by no means the longest-lived myth. I’ve seen lies about EU cabbage regulations that date back to the mid-20th century and were originally lies about the US government: six or seven decades of misinformation, circulating under the radar to pop up again in the age of social media.
Why do such ideas endure? What do they tell us about our attitude to science, evidence or the truth itself?
The obvious response is that we are too credulous: we’ll believe anything. I’m not so sure. There are plenty of things we should believe, but which many people do not — for example, that Neil Armstrong walked on the moon, that smoking dramatically increases the risk of lung cancer, that carbon dioxide emissions are changing the climate, and that routine vaccines are far more likely to prevent harm than cause it. The risk of believing anything must be weighed against the risk of believing nothing.
In the case of Blue Monday, the basic problem seems to be that nobody cares enough to ask a couple of simple questions. When “experts” “officially” say that it is a depressing day, which experts? What reasons do they give for making the claim? One or two clicks on a search engine reveal the answer: no experts believe this and no good reason has ever been given.
But in the case of climate change or vaccine denial — or, dare I say it, the curious belief in numbers written on the side of a big red bus — the problem is not that nobody cares. It is that people care passionately. They care so passionately that they will go to great lengths to dismiss contrary evidence. The scepticism isn’t lazy; it is energetic. And it’s something we should recognise in ourselves: who can honestly say they have never flipped a newspaper page, changed the channel, or found someone else to talk to at a party, in search of an opinion they can agree with?
So should we be more trusting, or more sceptical? Onora O’Neill, whose 2002 Reith Lectures were on the subject of “trust”, sharpens the question — as we might hope a philosopher would. Rather than trying to measure or increase some vague notion of “trust”, she says, we should be aiming for a better ability to trust what is trustworthy and to mistrust what is not.
Restoring trust in the claims of science, statistics, or expertise — while stoking a healthy scepticism of snake oil and pseudoscience — is not something that can be left to any one part of society. If experts — and for that matter, journalists — wish to be trusted, they must provide evidence of their trustworthiness. But the non-experts among us could also do more to keep ourselves well-informed.
How you demonstrate trustworthiness depends on who you are and what you hope to be trusted to do. A good starting point is the list of principles for “intelligent openness” set out a few years ago by the Royal Society in a report, Science As An Open Enterprise. (Baroness O’Neill was one of the report’s authors.)
Intelligent openness requires that the data used to make scientific claims be accessible, understandable, usable and assessable. “Accessible” implies publication online at minimal cost. “Understandable” means claims made in plain language, as clearly as possible. “Usable” may mean supplying data in a format easily analysed by computers, and it also suggests that conclusions be framed in a way that is relevant to everyday concerns. “Assessable” means that anyone with the time and expertise has the detail required to rigorously test the idea if they wish.
That is something scientists, statisticians, economists and other “experts” can do. What the rest of us owe them — and more importantly, owe ourselves — is to ask a few questions before we spread an idea on social media or rely on it to govern our votes, our diets, or our attitudes to each other.
One or two smart questions, or a moment double-checking with an internet search — that is often all it takes to provide the context we need to make a wiser judgment. It shouldn’t be too much to ask of ourselves.
Written for and first published in the Financial Times on 25 January 2019.