Over the next couple of weeks the classes of 2023 will be receiving their A level and GCSE results. Newspaper headlines have already started the scaremongering, with estimates of the fall in the number of A and A* grades at A level, compared with last year, ranging between 50,000 and 100,000 depending on which paper you read.

This drop is not unexpected. During the two Covid-affected years, 2020 and 2021, public exams were cancelled. School centres and teachers were asked to provide grades for A level and GCSE students, producing centre-assessed grades (CAGs) in 2020 and teacher-assessed grades (TAGs) in 2021. In 2020 schools and colleges were initially instructed to provide a grade for each student and then to rank order students within that grade. This was a somewhat misleading request in my view. Exam boards and the regulator were mainly interested in the overall rank orders, believing that an algorithm they provided would level out the grades and prevent any grade inflation.

However, when it was realised that this algorithm was producing idiosyncratic results, and following the Scottish government’s decision to abandon a similar approach, the government in England said that the grades centres had determined should stand. In 2021 the government made no attempt to deploy an algorithm, leaving schools and college teachers at the centre of the grading process.

The exams regulator Ofqual indicated that grades would be returning to pre-pandemic 2019 levels in two stages. They weren’t two equal stages though; grades in 2022 weren’t quite brought back down to the half-way point. Students receiving their grades in 2023 will therefore not get grades as high as their peers in any of the previous three years. In this sense the newspapers are correct, even if their calculations vary considerably.

Students with CAGs and TAGs have gone on to take university places (which was the main aim of the process), but there are some reports of higher dropout rates; some courses are allegedly proving too demanding for students who received higher grades than they might have achieved had they taken exams. I think this is an oversimplification; there are lots of other issues involved, and students have always changed courses or left higher education for a variety of reasons.

There are important implications here for schools and their approaches to assessment. Teachers are innately optimistic and encouraging about their students and their potential. Any time we ask teachers to make some form of overarching judgment about the performance of one of their pupils, by assigning a grade or label to describe attainment or progress, we might expect that optimism to be reflected in the grades. But it depends what you ask teachers to do. They are generally very good at deciding whether pieces of work by one student are better than those of another. This is known as comparative judgment. It’s making absolute judgments that’s the issue. This isn’t just a teacher thing, it’s a human thing. You, as a human, will be able to say instantly and reliably whether one person is taller than another. You won’t be as good at saying exactly how tall they are.

I recently suggested to a group of senior mathematics teachers that they go back to their schools and try some comparative judgment in a subject removed from their own, such as English or drama. I’ve tried this myself (as a former maths teacher), and I reckon I made a pretty good stab at correct comparisons, but estimating an overall grade is a much more difficult task. Even subject specialists often disagree about the ‘correct’ grade for a student. This is why I believe assessment reporting systems in schools should be based on ranking rather than absolute judgment. Ranking is simply more reliable.

As a parent I’ve been sent a variety of styles of report over the years. For example, I’ve received very detailed criterion-based analyses of my child’s performance in the national curriculum, running into many pages and presumably taking teachers hours to produce. I’ve also received reports which use language such as ‘emerging’, ‘exceeding’ and ‘mastering’ to describe my children’s attainment. Neither of these methods tells me what I really want to know as a parent. What I need is some way of understanding how my child is performing relative to others. What I really need is a rank order.

When it comes to assessment, teachers should only be doing what they can do well in the time they have available. Schools should not be asking them to do things which go beyond good formative assessment. If a school needs some overarching, aggregated grades or labels (and all schools do need these things for a variety of purposes), then the school should derive these from what teachers record as marks.

Teachers can, and should, assess pupils’ work for a variety of reasons. They should record their marks for some of this work. Anything else should be down to the school, not the teacher. Good assessment packages can help reduce workload; for example, they can process and weight marks to generate a pupil rank order, which can be reported directly or grouped in bands such as quintiles. Moving up and down these groupings over time then gives some indication of progress. Comparing these groupings with prior attainment helps to identify those pupils who might be underachieving.
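To make that concrete, here is a minimal sketch of the kind of calculation such a package might perform. The pupil names, assessment names, marks and weights are invented purely for illustration; a real system would draw on its own recorded data and weighting policy.

```python
# Sketch: derive a rank order and quintile bands from recorded marks.
# All names, marks and weights below are invented for illustration.

# Weight given to each assessment when aggregating marks.
weights = {"autumn_test": 0.3, "spring_test": 0.3, "summer_exam": 0.4}

# Recorded marks for each pupil across the assessments.
marks = {
    "Pupil A": {"autumn_test": 62, "spring_test": 58, "summer_exam": 71},
    "Pupil B": {"autumn_test": 45, "spring_test": 51, "summer_exam": 48},
    "Pupil C": {"autumn_test": 78, "spring_test": 74, "summer_exam": 80},
    "Pupil D": {"autumn_test": 55, "spring_test": 60, "summer_exam": 52},
    "Pupil E": {"autumn_test": 69, "spring_test": 66, "summer_exam": 63},
}

# Weighted total mark for each pupil.
totals = {
    pupil: sum(weights[assessment] * mark for assessment, mark in scores.items())
    for pupil, scores in marks.items()
}

# Rank order: highest weighted total first.
rank_order = sorted(totals, key=totals.get, reverse=True)

# Group the rank order into quintile bands (band 1 = top fifth).
n = len(rank_order)
bands = {pupil: (position * 5) // n + 1 for position, pupil in enumerate(rank_order)}

for pupil in rank_order:
    print(f"{pupil}: weighted total = {totals[pupil]:.1f}, quintile = {bands[pupil]}")
```

Once the school holds data like this, reporting a pupil’s quintile band at each reporting point, and tracking movement between bands, is a lookup rather than an extra judgment asked of the teacher.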

Rank ordering isn’t new. Our A level and GCSE exams produce rank orders of students based on their marks, and grade boundaries are adjusted when grading is determined, so why do some schools find it unpalatable? They will often cite the child who finds themselves bottom of the rank order. I understand their concern, but we aren’t telling this child anything they haven’t already realised; we are simply being honest. They will know if they worked hard or not. Their parents will know too, and may well value the honesty. I suspect I would have been at or near the bottom of the rank for PE, art and drama and my parents were under no illusions about that. I did better in maths, thank goodness!


Duncan Baldwin

Duncan Baldwin is an independent education consultant and a former teacher, school leader and headteacher. He was the Deputy Director of Policy for the Association of School and College Leaders for nearly ten years, leading on assessment and performance.