Sir Kevan Collins, CEO of the Education Endowment Foundation, recently argued that “If you’re not using evidence to inform your decisions, you must be using prejudice.”
But how, as a school leader, do you know which evidence to use in your context? How do you evaluate its impact? And how can you ensure that it is embedded within your school in a sustainable way?
Stuart Kime (@StuartKime), Director of Education at Evidence Based Education, helped us unpick this conundrum in yesterday’s event, Implementation and Evaluation, at Notre Dame High School. Thank you, Stuart, for coming; and to Susi and everyone else at the Norwich Research Leads Network (#NorRel) for making his visit possible.
The bulk of this article outlines some of the key points I believe will be valuable to schools generally, and specifically to the schools I have been, or am, working with, including those involved in the Sheringham maths SSIF project. Leadership in supported schools, in particular, will find useful suggestions on how to get more out of the free CPD with less investment of time.
At the end of the post, I will briefly describe how Cooperative Learning simplifies the four phases of implementation suggested in the School’s guide to implementation: Explore, Prepare, Deliver, Sustain.
This post reflects my limited understanding of the session and I take full responsibility for any errors.
Context before cake
As a consultant, I came to the session with a slightly different perspective than that of a school leader: I don’t need the EEF Teaching and Learning Toolkit to tell me that Cooperative Learning works. From Syrian refugee camp shack schools in southern Turkey, to multi-faith schools in the Midlands, to village schools in Norfolk, to London Colleges; turning your students into your primary teaching resource just makes sense.
Truth be told, I have always been quite sceptical about “evidence” – especially in our sector. This is partly due to the politicised nature of education, but especially because of the number of seemingly uncontrollable variables in schools.
However, the obvious counter-argument is the bias inherent in any such intuitive assessment, and the session opened with plenty of cautionary examples. You would intuitively assume that praising pupils would increase self-confidence and therefore raise attainment, wouldn’t you? It doesn’t. In fact, according to Stuart, on average, praise appears to have no effect on students’ achievement.*
A plethora of such curveball examples made his propositions all the more convincing and he presented his experiences and reflections with such passion that I was caught from the get-go on a warm sunny afternoon at Notre Dame High School. That takes some doing.
Stuart made reference to two free online resources from the Education Endowment Foundation (A School’s Guide to Implementation and The DIY Evaluation Guide) that are absolute must-haves for any school leader looking for a return on investment in CPD and consultancy, and which will be referenced throughout.
Furthermore, I advise looking at the DfE’s Standard for teachers’ professional development.
The Grand Unifying Theory
The DfE’s Standard for teachers’ professional development: Implementation guidance for schools points to three groups of people who must each do their jobs in order for CPD to be effective (see table, p. 11):
- Providers of professional development,
- School leaders, and
- Teachers.
I know that, as a provider, I do the legwork to ensure that every single CPD session I run is tailored to each client, based on the knowledge that has been made available to me.
I also know that, because I teach Cooperative Learning through Cooperative Learning, rather than through theoretical presentation, I have ample oral and written evidence that the vast majority of delegate teachers are highly engaged throughout, and most walk away raring to go, fully convinced that this will have an impact on their pupils. The way Cooperative Learning facilitates real-time feedback also means I can quickly change tack mid-session if needed.
This leaves school leaders as the dark horse over which I have no control.
So, here’s the million-pound question: why does one school invest less than one pupil’s PPG and completely turn around in less than a year, while another pours £2000 into CPD and consultancy visits, yet gets nowhere near the same impact in two?
Specifically, what I was looking for Stuart to answer was: “How can I secure effective follow-up by leadership in the implementation and evaluation stage?” Inundating headteachers with emails and phone calls is counter-productive and turns me into an obnoxious ogre. Yet leaving them to their own devices, knowing full well that they will not get the impact they have paid for, is morally wrong – and bad for business, too.
Stuart provided the answer in more ways than one, in a roundabout, eliciting way, carried by his pleasant demeanour as much as by his ample focus on hard evidence.
“Professional development is most effective when school leaders ensure that school, subject, phase and individual development plans are coherent and supported.”
– Standard for teachers’ professional development, p. 11.
Since the Regional School Commissioner’s endorsement, Stalham Academy has been all the rage, and everyone is looking to replicate its incredible achievement of moving a special measures school into the UK’s Top-500 league.
This is the reason for the series “Stalham Academy, what went right?”, which outlines in detail how Andrew Howard, Glenn Russell and their staff went about it.
However, what we have seen is that some school leaders and teachers visit Stalham for a few hours, attempt to replicate snippets of what they have seen, and then write off Cooperative Learning as a fad.
To avoid such unhappy misunderstandings, I intend to use the example of what I will term “Stuart’s cake” from this day onward. His simple message was: If we want to bake a specific cake, follow the recipe.
You visit a friend’s house and taste the best cake you’ve ever had. You beg your friend for the recipe, go home, and then decide to leave out the chocolate and the sugar, and to halve the time you leave it in the oven. The result, unsurprisingly, is nothing like the cake you tasted. You then call your friend and tell him off.
So, here’s another question: if Einstein (as is popularly claimed) defined insanity as repeating the same thing over and over while expecting different results, what should we think of someone who expects the same result while doing things differently over and over again?
It seems like plain common sense. But with this example from one of the country’s foremost experts on evidence and implementation, the message will hopefully stick.
So next time they pick up the Toolkit, Stuart advised, decision-makers should look into the details, find the specific recipe that has the greatest chance of working in their school, read it carefully – and then actually follow it.
And, even when we do follow the recipe of best-practice, Stuart reminded us that evidence is not a prediction. An indication, yes, and a basis of informed decision – but not a guarantee.
The mean law of averages
An important warning Stuart gave in relation to informed decisions is that the effect sizes outlined in the Toolkit are averages. In theory, this means that a zero-impact intervention such as Arts participation may in fact be based on one research project which found -2 years of progress and another which found +2 years of progress.
As a consequence, picking up the Toolkit and deciding on Feedback, arguing that “the Toolkit says it gives +8 months of additional progress per pupil per year”, won’t necessarily yield those results in your school. Which of the interventions measured gave the best results? Which one would be most relevant to your pupils? Which package can you reasonably afford with the time and budget you have available?
And I think that was another of Stuart’s key points: Do not be vague in your objective. Anyone who has ever done CPD with me, especially Project Leads of the SSIF, will recognise this message: Nebulous objectives will simply never get us where we want to go. This applies equally to teachers deciding how to run the lesson and school leaders deciding how to run the school.
Making strong decisions for the betterment of the people for whom we are responsible is not always received well and is seldom without conflict. “Mr Harrison is so mean, he gives us lots of homework!” But, sometimes we have to be mean to rise above average.
The first step in the implementation process is to decide where we want to go before we start thinking about how to get there.
This would apply equally to picking the intervention and to the ensuing evaluation of impact; in fact, the two tie together. Stuart drove home that we must pick our intervention type based on a very sharply formulated identification of very specific problems in our own school. Catch-all phrases such as “improved outcomes in maths” or “more pupil engagement” won’t cut it. Not on a lesson-planning level, and not on a school-development level. So, what do we want? Stuart’s advice was to be explicit. Going all over the place brings us nowhere.
Similarly, we also need to frame our evaluation question in very precise language to measure the impact of our chosen intervention. I think it would make a lot of sense to connect the original definition of outcomes that prompted us to pick the intervention in the first place to the evaluation itself, and to plan the two at the same time.
Just to remind ourselves, Utopia means “nowhere.” Once we have identified our problem, we set about finding that specific programme that seems most likely to give us a cost-effective solution based on the evidence to hand.
Now comes the less happy bit. Should we actually be doing this incredible and majestic programme that we hope will solve our specific problem?
This is another one of Stuart’s points I will not soon forget, and certainly a novel way to kick off a school improvement project: sit down and imagine that our project results in utter and dismal failure.
With a clear picture in your mind of yourself before the summer holidays, stuck with a negative impact on attainment; disenfranchised, overworked and angry staff; disheartened children; disappointed board members and parents; and a £3000 hole in your budget, you are now ready to reverse-engineer your failure.
Is it conceivable that we ignored the EEF’s A School’s Guide to Implementation and did not “create a shared understanding of the implementation process and provide appropriate support and incentives”? Or did we not “Complement expert coaching and mentoring with structured peer-to-peer collaboration”? Or maybe we forgot to “Plan for sustaining and scaling an innovation from the outset”? (A School’s Guide to Implementation, pp. 8-9).
If such a “pre-mortem” seems too brutal, consider Nancy Cartwright’s “tree” strategy: pick any support factor that is needed for your candidate intervention to succeed, then ask a simple yes-or-no question: can you manage it? If yes, continue down the tree to the next support factor. When you hit a no, don’t go ahead with that intervention. It shouldn’t take more than five minutes, and it could save five months of headaches.
Implementation and Evaluation of Cooperative Learning
Looking at the structure of implementation and evaluation suggested in A School’s Guide to Implementation, you will find that Cooperative Learning simplifies all four phases (see page 10).
Deciding to go for Cooperative Learning is easy: it is a simple, practical and coherent system which may be implemented effectively in all phases, even by newly qualified teachers and teaching assistants after a few hours of training, and very little tweaking allows a razor-sharp focus on specific areas of teaching and learning. Please explore this blog for ideas and examples.
Developing a clear, logical and well-specified plan is streamlined by the clear, logical and well-specified structure of Cooperative Learning Interaction Patterns (CLIPs). You simply decide which CLIPs to deploy, when, and in which classes. Most of my courses come with a roll-out plan which walks you through the first steps of this process.
Cooperative Learning CPD is best delivered in short bursts. I personally prefer twilights, because they minimise logistical issues: the time has already been allocated for staff meetings, so aside from getting teaching assistants on board there is no further planning, and no cover expenses. Neatly spacing out the twilights and interspersing coaching observation visits secures a measured, effective roll-out.
Cooperative Learning makes teaching as well as learning almost painfully visible, which simplifies planning, execution and evaluation of its deployment: One glance through the open classroom door is enough to ascertain whether it’s being done or not, and done well – there is no in-between.
This practical clarity also helps to simplify both formal improvement loops and basic sustainability. As noted by several headteachers, Cooperative Learning hits multiple areas of the SIDP and creates a shared language which helps teachers and leadership communicate and disseminate good practice with less formal and time-consuming structuring.
Cooperative Learning also provides a high volume of assessment data for your evaluation process, not only of the final outcomes, but of the processes that lead to them – in every lesson, from every pupil – in the form of written evidence, should you so desire. (My engagement with SOLO taxonomy was actually spurred by a wish to filter the information overload.)
There is a lot more to be said about A School’s Guide to Implementation and The DIY Evaluation Guide, and I hope to write dedicated articles on both. Get notifications of related posts on Twitter or join the COGS mailing list for updates.
Please visit evidencebased.education for more information on Stuart’s organisation.
- “Stalham Academy, what went right?” I-IV Complete Series
- Feedback strategies & Cooperative Learning
- EEF Teaching and Learning Toolkit; a Cooperative Learning gloss
- Engaging staff effectively with their CPD; A CL gloss
*) Feedback at the level of the self (of which praise is often a form) is generally ineffective (though this doesn’t mean it has an adverse effect), whereas feedback about the task itself, the process undertaken to complete it, and students’ self-regulation in completing it is often effective.