Predicted grades and university admissions

Proposals to change admissions seem motivated by removing problems with predicted grades. But are these real?

[This article was originally published by HEPI in their collection of essays on admissions reform Where next for university admissions? (HEPI Report 136). All the data used is published by UCAS at]

The largest group of applicants to UK universities are UK 18 year olds, some 290,000 in 2020. Though they have several options, the large majority of them (over 97 per cent in 2020) apply ahead of results to obtain conditional offers for entry. This long-standing system is underpinned by predicted grades provided by teachers. They serve to calibrate applicant and university decisions, with the aim that students end up holding a firm offer for somewhere they want to go without excessive risk of not getting in.


When policymakers think about where next for the university admissions system, their proposals tend not to feature this system of predicted grades. Quite the opposite: the motivation for redesign often seems to be to ditch them. Two problems are frequently cited. First, that predicted grades are damagingly inaccurate. And, second, that they are bad for equality because their inaccuracy disproportionately hits those from under-represented backgrounds seeking entry to the most selective universities.


Published data on predicted grades are not as rich as they should be given their importance. In particular, the cross-tabulation of predicted by achieved A level points, the Rosetta Stone of the issue, remains unpublished. But there are sufficient data to demonstrate that these two supposed problems with predicted grades are very likely false. (1)


Predicted grades certainly look very different from exam-awarded grades. Figure 1 converts the best three A level grades to points (where each grade step is one point, so AAA is 15, BBB is 12, and so on) to compare predicted with exam-awarded. Predicted points are higher than exam-awarded points. They are also more compressed in their range, squashed up against the A*A*A* (18) limit. Given these differences it is unsurprising that an applicant rarely gets exam-awarded points that equal their predicted points. The two matched just 16 per cent of the time between 2012 and 2017 (before unconditional offers started to affect the patterns), and an exact grade-by-subject match would be rarer still. The pattern for other qualification types differs, but the poor reputation of predicted grades for accuracy stems from these A level properties.

Figure 1 Predicted and exam-awarded points
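The points conversion described above can be sketched in a few lines. The grade-to-point mapping below is inferred from the examples in the text (AAA as 15 and A*A*A* as 18 imply A* = 6, A = 5, and so on down the scale); it is an illustration, not an official tariff.

```python
# Grade-to-point mapping inferred from the text: each grade step is one
# point, so AAA = 15 and A*A*A* = 18 imply A* = 6, A = 5, B = 4, etc.
GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def best_three_points(grades):
    """Sum the points of an applicant's best three A level grades."""
    pts = sorted((GRADE_POINTS[g] for g in grades), reverse=True)
    return sum(pts[:3])

print(best_three_points(["A", "A", "A"]))      # 15 (AAA)
print(best_three_points(["B", "B", "B"]))      # 12 (BBB)
print(best_three_points(["A*", "A*", "A*"]))   # 18 (the A*A*A* ceiling)
```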

But this perspective does not reflect the role that predicted grades serve in the university admissions system, or the realities of exam assessment. Predicted grades are better seen as a reliable estimate of the highest grades an applicant might realistically get through intrinsically uncertain exams.


In recent years predicted grades have acted in just this way, communicating the best exam-awarded grades that an individual might realistically get, with 'realistic' equating to a 25 per cent, one in four, chance. This is illustrated in Figure 2. Here the probability (0 to 100 per cent) of an applicant getting exam-awarded points at a certain level is on the vertical axis. The different levels of exam-awarded grades form the horizontal axis, where they are shown relative to the predicted grades. So, zero on this axis represents the applicant getting grades equal to or better than their predicted grades. Minus one means the applicant gets exam results at least equal to one grade below their predicted grades. And so on.

Figure 2 Probability of getting exam-awarded points relative to predicted points

The stepped line shows the actual properties of predicted grades in this period. It shows the predicted grades an applicant has (0 on the horizontal axis) are the exam grades that the applicant will reach or exceed around 25 per cent of the time. The applicant will get exam results at least one grade better than their predicted points (+1 on the axis) only around 10 per cent of the time. They will get exam results at least equal to three grades below their predicted grades (-3 on the axis) around 80 per cent of the time.


The model line on the graph is a simulated distribution of what we would expect to see if predicted grades did indeed work as an upper estimate and the variability of the exam-awarded grades followed a normal distribution. It is consistent with what is observed. There is more complexity to the predicted and exam-awarded relationship than the public data and this illustration can show (2). But predicted grades acting as an estimate of the upper quartile of likely grades, with a reliable distribution around that, holds up in much more detailed analysis.
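A minimal simulation along these lines can be sketched as follows. This is an assumption-laden illustration, not the article's actual model: exam points are drawn from a normal distribution whose upper quartile sits at the predicted points, with an assumed spread of two points.

```python
import random
from statistics import NormalDist

SD = 2.0                                   # assumed spread of exam points
OFFSET = NormalDist().inv_cdf(0.75) * SD   # ~1.35 points: upper-quartile gap

def relative_grade_probs(n=100_000, seed=1):
    """Estimate P(exam points >= predicted + k) for k from -3 to +1."""
    rng = random.Random(seed)
    gaps = [rng.gauss(-OFFSET, SD) for _ in range(n)]  # exam minus predicted
    return {k: sum(g >= k for g in gaps) / n for k in range(-3, 2)}

probs = relative_grade_probs()
# Roughly a quarter of draws reach or exceed the prediction (k = 0), around
# one in ten beat it by a grade or more (k = +1), and around 80 per cent
# land within three grades below it (k = -3): close to the stepped line.
```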


So predicted grades are not really a poor estimate of average attainment, more a reliable estimate of something like the upper quartile. They are saying: this student has a realistic chance of doing this well when it comes to exams. If you had to choose a single statistic of potential to underpin good matching to university offers then this would probably be it. That predicted grades are higher than exam-awarded grades is often taken as evidence of their inaccuracy. But this is equivalent to saying that an average is not the same as an upper quartile. This is true, but it is not an issue of accuracy. Nor is it a problem for university admissions.


But this does leave the wide range of possible exam-awarded points. Predicted grades are saying: the exam-awarded grades could realistically be this high, but will most likely be a grade or two lower, and could quite possibly be a grade or two lower again. Exam-awarded grades could then range over four points. Not good enough, many might conclude. But that verdict attributes all of the uncertainty to shortcomings in the predicted grades. It is unlikely to be this simple.


In 2020 Dame Glenys Stacey told the Education Committee: 'It is interesting how much faith we put in examination and the grade that comes out of that […] they are reliable to one grade either way. We have great expectations of assessment in this country'.(3) The regulator probably had in mind various uncertainties in marking the scripts. But there are other ways that exam-awarded points can fluctuate which have nothing to do with the underlying ability they are trying to measure. You might feel unwell on exam day, for example. The supposed marking uncertainty alone puts the random variability of exam results into the territory of plus or minus two points over three A levels, similar to their spread around predicted grades. Exam-awarded grades themselves are likely not particularly good at predicting exam-awarded grades.
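One way to read that arithmetic, as a sketch: if each of three A levels carries independent marking noise of roughly one grade (one point) either way, the noise in the three-subject total grows with the square root of the number of subjects.

```python
import math

# Assumed independent marking noise of ~one grade (one point) per subject.
per_subject_noise = 1.0
# Independent noise adds in quadrature across the three subjects.
total_noise = math.sqrt(3) * per_subject_noise
print(round(total_noise, 2))   # ~1.73, i.e. roughly plus or minus two points
```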


It is not clear whether the range of exam-awarded points seen for each level of predicted points is due to uncertainty in the predicted grades, the exams, or (most likely) a mixture of different kinds of uncertainty in both. The variation of exam-awarded points about predicted points does not demonstrate predicted grades are inaccurate. It points to the difficulty of capturing what is being measured. 


Even if, overall, predicted grades are accurate and reliable, they could still be an unsuitable basis for admissions if they were damaging to equality. The core concern is that groups who are under-represented in the most selective universities (where predicted grades matter most) might be more likely to have their exam grade potential understated by predicted grades. Such differentially lower predicted grades would deliver a double blow: deterring aspiration in university choices and reducing the chances of getting an offer. This is the reasoning for supposing that switching to an admissions system based only on exam-awarded grades would improve equality. But the data indicates this is unlikely to be the case.

Figure 3 Predicted points minus exam-awarded points by equality group

The most under-represented groups in higher tariff universities across the three readily available equality dimensions are POLAR Q1, men and the Black ethnic group. Figure 3 shows how much higher predicted points are than exam-awarded points across these groups (4). The larger this value, the more favourably predicted grades position applicants relative to using exam-awarded grades. Two of the most under-represented groups, POLAR Q1 and the Black ethnic group, have substantially larger values than average. So, it is unlikely that they would have more favourable admissions outcomes in an exam-awarded only system. Men, the third under-represented group, are slightly below average and so might see a small benefit from discarding predicted grades.


But averages can be misleading. The grade distribution is important too. To account for this, imagine a simplified admissions system where higher tariff universities admit the 30 per cent of 18 year old A level applicants who have the highest points. How would the chances of getting in for different groups vary if predicted or exam-awarded points were used as the basis of admissions?

Figure 4 Change in entry chances on switch from predicted to exam-awarded admissions

Figure 4 shows how the entry chances for different groups change when the basis for admissions is switched from predicted to exam-awarded points. Two of the most under-represented groups would see their chances of getting into higher tariff providers fall if exam-awarded points were used instead of predicted points: by around 5 per cent for POLAR Q1 and around 20 per cent for the Black ethnic group. The entry chances of men are similar under the two models.
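The mechanics of this thought experiment can be sketched with synthetic data. The group means, gaps and noise below are invented for illustration only, not UCAS figures; the point is the ranking-and-quota machinery, not the numbers.

```python
import random

def entry_rates(applicants, score, quota=0.30):
    """applicants: list of (group, predicted, exam) tuples.
    Admit the top `quota` share by `score`; return entry rate by group."""
    ranked = sorted(applicants, key=score, reverse=True)
    admitted = ranked[: int(len(ranked) * quota)]
    groups = {a[0] for a in applicants}
    return {g: sum(a[0] == g for a in admitted) /
               sum(a[0] == g for a in applicants) for g in groups}

rng = random.Random(0)
pool = []
for _ in range(20_000):
    group = rng.choice(["Q1", "Q5"])
    predicted = rng.gauss(10 if group == "Q1" else 12, 2)  # invented gap
    exam = predicted - 1.35 + rng.gauss(0, 2)              # quartile model
    pool.append((group, predicted, exam))

by_predicted = entry_rates(pool, score=lambda a: a[1])
by_exam = entry_rates(pool, score=lambda a: a[2])
# Comparing by_predicted[g] with by_exam[g] shows how a group's entry
# chances move when the admissions basis switches.
```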


Perhaps predicted grades do no overall harm to equality but still create small pockets of unfairness that hit some groups hard. One concern raised here is around the small number of applicants who are 'under-predicted'. Specifically, whether some systemic unfairness in predicted grades means those from under-represented backgrounds are disproportionately likely to be in that group.


Analyses which claim to demonstrate this generally take a subset of students who end up with very high exam-awarded grades, and then look at their predicted grade background. If their predicted points are lower than the exam-awarded points, the student is said to be under-predicted. Typically, under-represented groups, like POLAR Q1, are found to have larger proportions of high-exam-grade students who are from low-predicted-grade backgrounds (under-predicted) than over-represented groups (like Q5). This drives the conclusion that predicted grades are differentially damaging to under-represented groups.


But these conclusions are very likely wrong. The reason is that the under- and over-represented groups have different predicted grade distributions which are not accounted for. The predicted point distribution for Q1 is shifted towards lower points relative to Q5 (Figure 5). So, for a grouping of high exam-awarded points there will be a greater share of Q1 applicants who can potentially get there by under-prediction than there is for Q5 applicants. For example, up to 55 per cent of Q1 applicants could be under-predicted if they obtained 14 points (ABB), whereas only a maximum of 35 per cent could be under-predicted from Q5 (Figure 6).

Figure 5 Predicted point distributions Q1 and Q5, 2020
Figure 6 Maximum possible proportion of under-prediction by exam-awarded points of applicants
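The quantity in Figure 6 has a simple form: the maximum share of a group that could be under-predicted at a given exam points level is just the share of that group whose predicted points fall below that level. A sketch with assumed normal distributions (the means and spread below are illustrative, not the published ones):

```python
from statistics import NormalDist

def max_under_prediction(mean_pred, sd_pred, exam_points):
    """Share of a group whose predicted points fall below exam_points,
    under an assumed normal distribution of predicted points."""
    return NormalDist(mean_pred, sd_pred).cdf(exam_points)

# Illustrative means only: Q1's predicted distribution sits lower than Q5's.
q1 = max_under_prediction(mean_pred=13.5, sd_pred=2.0, exam_points=14)
q5 = max_under_prediction(mean_pred=15.0, sd_pred=2.0, exam_points=14)
# Because Q1 is shifted lower, a larger share of Q1 sits below any high
# points threshold, so q1 exceeds q5 here by construction.
```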

With these distributions, and the inherent random noise in exam results, when looking at just high-exam-grade students it is inevitable that more of the Q1 group will have got there through under-prediction. It simply reflects that there are more Q1 students with lower predicted points than Q5. This will be the case even if predicted grades have exactly the same relationship to exam-awarded grades for every group; that is, even if they are totally fair in that respect.


In practice the actual distribution of predicted grades near to the high-grade threshold and the assumed exam variability drive the patterns. Simulations of this indicate you would generally expect to see 40-80 per cent higher levels of ‘under-prediction’ for Q1 compared to Q5 amongst those with higher exam-awarded points. Such results say nothing about the fairness of predicted grades.
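A simulation of this selection artefact can be sketched as follows (all numbers assumed): both groups get an identical predicted-to-exam relationship; only the location of the predicted distribution differs between them.

```python
import random

def under_predicted_share(mean_pred, n=200_000, threshold=15, seed=2):
    """Among applicants with high exam points, the share whose predicted
    points were below their exam points ('under-predicted')."""
    rng = random.Random(seed)
    high = under = 0
    for _ in range(n):
        pred = rng.gauss(mean_pred, 2)
        exam = pred - 1.35 + rng.gauss(0, 2)   # identical model for both groups
        if exam >= threshold:                  # condition on high exam points
            high += 1
            under += pred < exam
    return under / high

q1 = under_predicted_share(mean_pred=10)   # Q1: predicted distribution lower
q5 = under_predicted_share(mean_pred=12)   # Q5: predicted distribution higher
# q1 comes out larger than q5 even though the predicted/exam relationship
# is identical by design: the gap is a selection effect, not unfairness.
```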


All of these equality analyses are approximations in one way or another. But the data does not provide any reason to suppose that the use of predicted grades in the admissions system disadvantages under-represented groups. The opposite is more likely to be the case for the most under-represented group, POLAR Q1.

Predicted grades and admissions

If you view predicted grades as an estimate of how well someone might realistically do, and recognise that exam-awarded grades themselves carry random noise, then there is no accuracy reason not to use predicted grades in university admissions. Teachers are often portrayed as incompetent or scheming when it comes to predicting grades. The data says they do a difficult job well. Perhaps clarifying the nature of predicted grades, by expanding the current single value to a likely upper and lower level of attainment, would help this be more widely understood.


Omitting predicted grades from admissions would result in a poorer matching of potential to places. It would reduce the amount of measurement information about the underlying potential-to-flourish that universities are really looking for, and so increase the influence of random noise in exam results.


The belief that predicted grades harm equality is not supported by the data. The pattern is mixed across under-represented groups, but overall predicted grades are probably more aid than hindrance. Many obstacles stand in the way of under-represented groups getting to more selective universities, but the use of predicted grades in the admissions system is not one of them.


More widely, predicted grades enable an admissions system that affords more time for decision making, and provides structure and security to the process. They also support an orderly and managed process from the university perspective, maximising intakes and letting students commence their studies without undue (and unfunded) delay. It seems reasonable that all these properties are particularly helpful to those from backgrounds with less familiarity with higher education and fewer resources. We would not really know until the system was gone. Policymakers who plan to take this risk on the basis of failings in predicted grades should take care the problems they want to solve are real.

(1) This analysis uses the summary distributions of predicted and achieved A level points in UCAS End of Cycle data resources,


(2) Who you get an offer from and what the conditions are, for example, together with censoring effects at the limits of the points scale. The relationship has also changed through time for a number of reasons, possibly including the deflation of exam awarded grades (


(3) See


(4) These differences are influenced by a complex series of factors, including attainment distributions and application choices, beyond the scope of this note but the general pattern of elevated predicted grades for POLAR Q1 and the Black ethnic group holds in more detailed analysis (for example,