To start, Guy suggested that schools should pursue what he called ‘results plus’. While quantifiable measurements (‘results’) can be useful, we must also evidence desirable, qualitative dispositions (‘plus’). “The form of assessment used in schools should support both,” he said. “There’s no use developing one type if it has a deleterious effect on the other. We should broaden our vocabulary and give ourselves the linguistic freedom to choose the right method, rather than saying ‘measure’, which restricts us to quantitative forms of evaluation.”
Believing that ‘creativity’ is too broad and nebulous a term, Guy outlined the dispositions he felt were most important: curiosity, rational scepticism, mindfulness, intellectual humility, imagination, determination and collaboration. While he felt most educators would agree on these, defining them firmly is the first step towards determining how to assess them.
Peter Hyman explored how we might evidence these dispositions, highlighting that assessment categories can be useful here. For example, some things don’t require composite grades but are worth tracking over time (e.g. student wellbeing). Other things may require a light touch. For example, students might conduct self-assessments against predefined criteria. Finally, we might wish to carry out quantitative assessments with a formal grade. But even this needs a shakeup: “We can learn a huge amount from what employers are doing with strength-based assessments,” he said. “There’s no reason schools couldn’t create sophisticated versions of these for students, alongside their academic results.”
Guy agreed that the business world – as hard-headed as it comes – is extremely comfortable with qualitative assessment. So why, in education, do we seem to believe that it’s risky or too subjective? A practical way to evidence qualitative behaviours, he argued, is to apply a three-pronged test. First: does a disposition become less support-dependent over time, e.g. do students learn to ask questions without encouragement? Second: do students manifest behaviours in subjects beyond those in which they first developed them, e.g. do they begin asking questions in different contexts? Third: does understanding grow more sophisticated over time, e.g. do students start by asking blunt questions but gradually refine them?
At this point, Rachel opened up the discussion to attendees. One made the point that this discourse is not new. Strength-based assessments have failed before – why is now any different? Peter argued that the pandemic is the differentiating factor: “There’s a growing set of arguments, exacerbated by Covid, that this moment in time is different from what’s come before. While we need to learn from past failures, we’re in a better position to implement change now than we’ve ever been.”
Another attendee asked how these theoretical discussions should be put into practice. In response, Bill Lucas shared some of the global research being conducted by Rethinking Assessment. “There’s a rich palette of successful evidencing methods we can use,” he explained. “There are well-validated psychometrics. There are useful self-reporting inventories. There’s a battery of different kinds of performance-based assessments… mastery transcripts… micro-credentialing… gaming and simulations. The future is visual and it’s digital. I believe we’re going to have a much richer picture.”
Looking forward, the panellists agreed that as employers move away from traditional qualifications, education will be forced to follow. Things like e-portfolios (not grades) will become the norm for evidencing a student’s work. Bill Lucas passionately summed up the sentiment: “I’d like to imagine a world in which we don’t have ridiculous binary discussions about whether we want young people to be kind and to think for themselves, or to be brilliant ‘academics’. We can have both.”
A huge thanks to all the panellists and attendees. Stay updated with the ongoing discussion at edge.co.uk and rethinkingassessment.com