Teachers could predict exam grades better – but not much
Teachers forecast the right results nearly half the time. Few professions can boast such a record
“Half of A-level grade predictions prove to be wrong, figures show, raising questions about the use of predicted grades in university applications. Just 48.29 per cent of the grades forecast by teachers last year were accurate, according to statistics published by OCR, one of England’s biggest exam boards.” – The Guardian, October 22
Is it? There are seven grades available, including the U grade (fail, to you and me) and the new A*, so the chance that a dart-throwing bonobo would pick the right grade is about 14 per cent. Cambridge Assessment, the group that analysed OCR’s exam statistics, reckons teachers are accurate almost 50 per cent of the time. Some 92 per cent of forecasts were within one grade of the result. Better than any bonobo could achieve.
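The bonobo baseline is easy to check for yourself. Below is a toy simulation – nothing from the Cambridge Assessment analysis; the grade labels and the randomly generated "results" are assumptions for illustration only:

```python
import random

# The seven grades on offer, including U and the new A* (assumed labelling)
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def bonobo_accuracy(true_grades, seed=0):
    """Share of results a uniformly random guesser - the dart-throwing
    bonobo - gets exactly right."""
    rng = random.Random(seed)
    hits = sum(rng.choice(GRADES) == t for t in true_grades)
    return hits / len(true_grades)

# Simulate a large batch of exam results (any distribution would do:
# a uniform guess is right 1/7 of the time regardless).
rng = random.Random(1)
results = [rng.choice(GRADES) for _ in range(100_000)]

accuracy = bonobo_accuracy(results)
print(f"bonobo: {accuracy:.1%}")   # close to 1/7, i.e. about 14 per cent
print("teachers: 48.29%")          # the figure reported by OCR
```

Seven equally likely darts land on the right grade about one time in seven, which is the roughly 14 per cent quoted above; the teachers' 48.29 per cent is well clear of that.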
Am I supposed to be impressed that teachers are better forecasters than dart-throwing bonobos?
It impresses me. Few other professions can boast such a record. The psychologist Philip Tetlock, in a landmark study of expert forecasting, once gathered 27,450 quantifiable forecasts from about 300 experts. Over two decades he was able to observe how good those forecasts were. In each case he grouped the possible outcomes into three bands, each roughly equally likely on the historical evidence – so blind guesswork would be right about a third of the time.
You’re going to tell me that the experts fared poorly against the bonobos.
I am. And Professor Tetlock wasn’t focused on “random-walk” processes such as the movement of stock prices; he was asking for predictions on political and economic outcomes that were, in principle, predictable. The experts, whether journalists, diplomats or academics, fell short. No doubt forecasting exam grades is a simpler matter than making political forecasts, but picking the right grade almost half the time is no bad performance.
But could it be better? I note that grammar and independent schools managed to deliver more accurate forecasts.
Fractionally more accurate. But they had an easier job. The top grades were more likely to be forecast correctly, and I’m afraid top grades are disproportionately the preserve of independent and grammar schools. Teachers at these schools were also less likely to be optimistic. One imagines darling Felix and Felicity drifting into Oxbridge on a wave of effusive fluff from their teachers, but students from independent schools are almost three times as likely to get at least three A grades as those from state schools, at which point it becomes hard to deliver over-optimistic predictions.
But there is a general tendency for teachers to deliver rose-tinted grade forecasts, isn’t there?
Yes. Erroneous forecasts were three times as likely to be too high as too low. That might not be a problem – students whose exam results fail to live up to their teachers’ promises are often turned away from their first-choice university and must find a place somewhere less competitive. Far rarer is the process of “adjustment”, in which a student whose final grades beat the prediction trades up to a more prestigious course than the one that made the original offer. So teachers may be right to err on the side of optimism.
Sounds like cronyism.
Well, teachers’ predictions can’t match Wall Street analysts’ earnings forecasts for optimism. In the past 25 years, analysts’ forecasts have been too optimistic in 22 years and too pessimistic in just three.
Nobody takes Wall Street seriously any more. But exam forecasts are important: universities use them when making offers.
Universities do take other factors into consideration, but you are right. The situation is hardly reassuring. There is one consolation, though: the results being predicted here aren’t flawless measures of student ability. They’re just a snapshot of how a student coped on the day with hay fever, menstrual cramps or just an unfortunate choice of questions. Teachers may not forecast the vicissitudes of exam season correctly – but who’s to say their predictions aren’t just as good an indication, or better, of how talented students really are?
So all is well.
Perhaps. A final note of caution. These forecasts were submitted to OCR as late as May, four months after forecasts made for university admissions. For all I know, the forecasts that really matter – those made for university admissions – are optimistic beyond the dreams of Wall Street.
Also published at ft.com.