In 1989, several years before the world watched the chess-playing supercomputer Deep Blue vanquish the world champion Garry Kasparov, a computer notched up a different milestone in artificial intelligence, one that went all but unnoticed. This was largely because the evidence was too crude to be publishable.
Just after lunchtime on May 2, someone at Drake University in Iowa — presumably a male student — started an online text chat with a user at University College Dublin. The UCD user’s handle was “MGonz”.
MGonz was not a gentle conversation partner. Their opening gambit was “cut this cryptic shit speak in full sentences”, followed by “OK thats it im not talking to you any more” and “ah type something interesting or shut up”. MGonz hurled abuse and grilled the Drake student about his sex life.
Over the next hour and 20 minutes, the two exchanged juvenile insults and prurient questions, with MGonz goading the student into boasting about the size of his manhood. Finally, the student called MGonz a “stupid homosexual”, MGonz called the student “obviously an asshole” and the student logged off, presumably writing off MGonz as an abusive troll.
But while MGonz was abusive, it was not a troll — it was a simple chatbot programmed by UCD undergrad Mark Humphrys that was left to lurk online while Humphrys went to the pub. The next day, Humphrys reviewed the chat logs in astonishment. His MGonz chatbot had passed the Turing test.
The Turing test was invented by the mathematician, codebreaker and computing pioneer Alan Turing in 1950. The test is simply for a computer to successfully pretend to be a human in a text-based conversation with another human. Turing predicted that by 2000 computers would be able to pass as human 30 per cent of the time in a five-minute conversation. I like to believe he had something more uplifting in mind than MGonz's exchange with the student.
Turing’s test is a benchmark for artificial intelligence, and a controversial one — but I am less interested in the test itself than in the moral of the story of MGonz’s success. Faced with the difficult task of convincing a human that a chatbot is human, the obvious tactic is to increase the sophistication of the chatbot. Humphrys stumbled upon an alternative: reduce the sophistication of the human. MGonz had passed the Turing test, but is it not also fair to say that the student had failed it?
A good conversation involves give and take, builds over time and exists in a context rather than a vacuum. These are all things that any chatbot finds hard. But MGonz could generate plausible dialogue because insults need neither context nor memory. And it is impossible to read the MGonz transcript without thinking of ugly parallels, including the chaotic first presidential debate between Donald Trump and Joe Biden, or any Twitter spat you care to name.
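MGonz's actual rule set is not reproduced here, but the structural point is easy to illustrate. A minimal sketch of a chatbot in the same spirit might look like this: each reply depends only on the current message, with no memory of anything said before. The stock lines are taken from the transcript quoted above; the dispatch logic is invented for illustration.

```python
import random

# Hypothetical sketch, not Humphrys's code: a stateless insult bot.
# Because it keeps no record of the conversation, it can never be
# caught contradicting itself -- insults and deflections need
# neither context nor memory.
STOCK_LINES = [
    "cut this cryptic shit speak in full sentences",
    "ah type something interesting or shut up",
    "you are obviously an asshole",
]

def reply(message: str) -> str:
    """Return a retort based only on the current message; no state."""
    if not message.strip():
        # Punish silence.
        return "ok thats it im not talking to you any more"
    if message.rstrip().endswith("?"):
        # Deflect questions back at the user instead of answering.
        return "why do you ask?"
    return random.choice(STOCK_LINES)
```

The absence of state is the whole trick: a bot that remembers nothing has nothing to get wrong, which is precisely why abusive small talk is so much easier to fake than genuine conversation.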
Cathy O’Neil’s book The Shame Machine describes Twitter pile-ons as reflecting “a host of reactions: pain, fury, denial and often a frantic search for acceptance”. It is an environment in which MGonz would thrive.
From political discourse to a chat over a beer, we are at our best when our conversation explores complex issues, is sensitive to context and allows for shades of grey. But complexity, context and ambiguity do not play well on social media. The qualities that distinguish us from MGonz are the qualities that get squeezed out by a fast-moving, sound bite-driven world.
An alternative to insults and bullying is for a politician to repeat scripted lines, from “education, education, education” to “get Brexit done”. That is an understandable response to the constraints of modern communication, but it is hardly an exhibit of the best and deepest political thought.
The philosopher Gilbert Ryle once wrote, "To a partly novel situation the response is necessarily partly novel, else it is not a response." Ryle is right, and clearly these scripted sound bites are not intended to be a response to anything. They are designed to be clipped, shared and retweeted in splendid isolation.
I am not sure I can suggest much to make social media less angry and shallow, or political discourse less pre-cooked and polarised. But each of us can take responsibility for our own conversations. A useful rule of thumb is that if we are having dialogues that MGonz could emulate, we should probably be rethinking our approach.
Brian Christian’s delightful book The Most Human Human explores the history of chatbots, while meditating on the nature of good conversation. Christian argues that chatbots tend to pass for human because we humans set the benchmark so low. So many of our interactions are predictable or perfunctory or downright rude. No wonder the chatbots find us easy to imitate.
Conversation is not easy and, after two years of social distancing, we are all a little out of practice. But the best conversations are delightful, even transformative experiences. So let’s start by vowing to do better than MGonz and see what we can build from there.
Written for and first published in the Financial Times on 18 March 2022.