A summary and commentary on the full-day event “Implementation and Evaluation” with Stuart Kime from Evidence Based Education at Abbey Conference Centre, 17 September 2018, brought to us by Norwich Research School.
This article naturally follows Stuart’s Cake; getting more out of your CPD through Implementation & Evaluation, which covers the precursor event in May and the first step of picking the right intervention based on evidence. If you missed it, explore it later for full context (and for its dedicated section on Stuart’s highly useful self-reflection tool dubbed the “premortem”).
Again, thank you Stuart (@ProfKime) for coming; and to Susi, Niki (@chemDrK) and everyone else at the Norwich Research Schools Network (@NorwichRS) for making his visit possible. This post reflects my limited understanding of the session and I take full responsibility for any errors.
A note on vocabulary. Intervention in this text does not mean TAs taking SEN pupils out of the class. Following page 4 of the DIY Evaluation Glossary, intervention is used here for “any programme, policy or practice we wish to evaluate.”
(And, as I promised my colleagues from Open Academy, I refer you to Jon Biddle as the man to see about your literacy intervention).
Collaborate and go to paradise
Stuart Kime has a dream which I believe he himself termed his “utopic vision”: A network of hundreds of schools all sharing and developing research/best practice. As he aptly phrased it, “Non-scalable is a waste of time.”
Imagine it: You test an intervention in your school with 30 pupils. Then another school replicates your intervention – that’s 60 test subjects – or perhaps they adapt it slightly and retest its effectiveness. Then two more schools do the same, and so forth. The next schools start expanding that intervention into related areas or use experiences to enlighten similar fields; test, learn, adapt; tweak and twist, grow and develop – with immaculate monitoring and evaluation of each step as it is replicated by more and more schools.
There are very good reasons for schools to break the academic monopoly on research. According to a recent article by Dr Gaz, only 15–20% of educational research is ever cited, and the most common number of citations for an educational research paper is 0. So, if education academics themselves find the majority of published papers useless, why should teachers be bothered?
The truth is that the gap between theory and practice seems impossible to span. You will find more of my own worried thoughts on this in the 2015 article … From the mountain to the valley, where the best part of a famous-name conference at the UEA was speaking to a colleague in the foyer. Recognise that experience?
However, Stuart warned that the key to successful sharing within such a network is that we must be clear about what we are doing if others are going to have a shot at building on our work. In a word: Evidence. This is where teachers have something to take from researchers – not their knowledge, but their skills.
And best of all: Researchers suffer from what Stuart termed the “file drawer syndrome,” meaning there is little incentive to publish the null or negative findings from research studies. But for teachers, a lead balloon is a success story, because it shows what NOT to do. And, be honest, when was the last time you subtracted from your workload rather than added to it?
“It doesn’t matter that your intervention fails, what matters is that you learn.”
You will find his utopian vision echoed across this blog, as I have been banging on since long before the SSIF about how schools that have adopted Cooperative Learning should be sharing best practice, ideas and any tailored materials: “Hey, I use Cooperative Learning with Talk4Writing like this! Try it out!” – “Hello, I just found these amazing metacognitive flashcards on Pinterest! Here’s the link!” – and so forth.
And this is not just because schools should be sharing anyway, but because Cooperative Learning interventions are so ridiculously easy to compare and replicate between classrooms, between sites in MATs, in clusters, and in research networks. Scale up as much as you want, all thanks to the “shared language” headteacher David Oldham mentions here.
It’s both what you share, and how you share it
I fully appreciate that Stuart’s vision of a research-informed network does raise the bar from mere sharing of ideas, resources and good practice: The useful sharing of research requires much more comprehensive documentation, which includes the rationales, planning process, training that preceded the actual intervention itself, etc. “This is what we did, this is the effect it had” rather than “We did something, this is the effect it had” or even worse “This is the effect we think it had, because the teacher in charge of the control group spreadsheet went on maternity leave.”
An obvious example is our terminology: When Stuart says “intervention,” he does not mean “TAs with SEN pupils.” When I say “Cooperative Learning,” I do not mean “group work.” (See What *isn’t* Cooperative Learning; a guided meditation….)
This is where Cooperative Learning schools have the advantage: When sharing research evidence and interventions, all Cooperative Learning Interaction Patterns have a unique name and specific, predefined steps. Basic variations and their rationales are clear. Subtasks may be interjected at precise points to arrive at precise outcomes. Even terms such as “Subtasks” and “Instruction Checking Questions” need no explanation, so there is very little to misunderstand between schools who have received the training.
“For our intervention, we used this CLIP with the these target phrases and that subject vocabulary, and we used these instruction checking questions in the first phase. In the second phase we used that CLIP…”
- Sample DIY intervention outline
For useful sharing of cross-school research without burn-out, Cooperative Learning is a simple tool which is already deployed and used throughout your school. No further work needed. For councils, MATs and clusters serious about sharing best practice and research effectively with minimum workload, Cooperative Learning is (pardon my Americanism) a no-brainer, something I hope to demonstrate in the near future (see below).
The receiving end: Faithful Adoption – Intelligent Adaptation
If the innovative “givers” are responsible for clear language, then the role of the “recipients” in our dream network can be summed up with the catchy refrain of “Faithful adoption – Intelligent adaptation.”
“Faithful adoption” means to connect back to the research evidence from the original project (again, so we don’t reinvent the wheel unnecessarily). It means guarding the fidelity of the intervention by actually spending time understanding the documentation. Who were the target pupils? Which pitfalls did teachers encounter?
This is a prerequisite for the next step of “Intelligent adaptation,” which is to make the intervention relevant to your school. Don’t copy-paste an intervention which worked in an outstanding, gender-streamed grammar school in London into your RI primary in the Hebrides. Extreme example, but in any intervention, carefully consider which elements are relevant to effective deployment in your context. (It seems obvious, but you’d be surprised; that’s a warning from Stuart, who has more experience than any of us with deploying evidence-based interventions.)
Finally, this all raises the question: What does that “intelligent adaptation” mean in relation to sharing Cooperative Learning across schools? The answer is “Read my book.” (I’ve always wanted to say that!) In fact, thanks to Drew Howard, a whole section in “The Beginner’s Guide to Cooperative Learning” is dedicated to that specific issue for the benefit of school and MAT leaders. Read more in Seven-league boots; next steps for Cooperative Learning in 2018-19.
An appropriate measure
This brings us to the actual evaluation, and the vast fields of validity and reliability which are well beyond the scope of this article; I must refer you to the resources provided below, but in essence, these terms mean that your research measures what it claims to measure and that it is consistent over time and context, respectively.
A funny example of poor validity, courtesy of Mary Whitehouse, is the phrase “milk float” appearing in a test for Chinese students. In practice, your evaluation is composed of two elements: Impact and Process.
Impact evaluation is the “what” of your intervention: Understanding whether or not your intervention has had an impact on attainment. Interestingly, Stuart said there is no obligation to do baseline testing, as the real yardstick is the control group of peers who did not receive the same intervention. Establishing this comparison group is termed the “most important step in DIY evaluation” to understand the impact of the approach you are testing. The DIY Evaluation Guide therefore goes into great depth on setting up this control group (I refer you to pages 8-13).
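To make the comparison-group idea concrete, here is a minimal sketch of how such an impact comparison is typically quantified: the difference between the intervention group’s mean score and the control group’s, standardised as an effect size (Cohen’s d with a pooled standard deviation). The function name and the scores are invented for illustration; this is not the DIY Evaluation Guide’s own procedure, just a common way of expressing the same comparison.

```python
# Sketch: standardised mean difference (Cohen's d) between an
# intervention group and a control group. All names and numbers
# below are illustrative assumptions, not real study data.
from statistics import mean, stdev

def cohens_d(intervention, control):
    """Standardised difference between two groups' mean scores."""
    n1, n2 = len(intervention), len(control)
    s1, s2 = stdev(intervention), stdev(control)
    # Pool the two groups' standard deviations
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(intervention) - mean(control)) / pooled

# Invented post-test scores for two groups of ten pupils
intervention_scores = [14, 17, 15, 18, 16, 15, 19, 14, 17, 16]
control_scores = [13, 15, 14, 14, 16, 13, 15, 14, 15, 13]

print(round(cohens_d(intervention_scores, control_scores), 2))
```

Note that this is exactly why the control group matters: without `control_scores` there is nothing to subtract, and a raw improvement in the intervention group tells you nothing about what would have happened anyway.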
Process evaluation is the “how” of your intervention and can be used alongside impact evaluation to understand how the intervention was delivered on the ground, including:
- Was it delivered as intended?
- What are staff and pupils’ perceptions of the approach?
- What has worked well and what has not worked so well?
One extreme example Stuart gave of the value of the control group was one teacher saying of his intervention, “It nearly killed me,” during the process evaluation. Happily, the improvement of his test subjects was praised by all teachers, so it was worth it – “we are teachers after all, the SAS of education, we give our lives, don’t we?” We all nodded at Stuart, except one clever delegate who asked about the control group. Surprise! The control group had received the exact same positive feedback. So, maybe this specific intervention is not worth your life?
And that realisation is perhaps the greatest value of in-school research.
Feet on the ground, first steps
Getting back to day to day reality, I am very honoured to be involved in a small scale intervention for the Norwich Research School with Ed Dooley, deputy head at one of the SSIF schools. We are “hoping to measure the impact of Cooperative Learning.”
No! Scratch that! “We are going to measure the impact of specific CLIPs on very specific learning objectives in maths with roughly 15 pupils from one particular class, by comparing them with a control group comprising the other 15 pupils in the class, who will be taught using the exact same subject materials without any CLIPs.” To be precise.
We intend to make this a model scalable intervention, primarily for the benefit of the SSIF schools, in an attempt to help make Stuart’s dream come true and demonstrate on a meta-level that Cooperative Learning is the ideal vehicle for sharing research and developing best practice, whether your network is a MAT, a cluster or an opportunity area project.
I hope to present a form of log in which Ed and I document the practical application of Stuart’s advice as outlined in The DIY Evaluation Guide as well as Putting Evidence to Work: A School’s Guide to Implementation.
If I do not manage a follow-up article on Stuart’s final advice on analysis, that stage of the intervention will be covered here instead. Details to follow.
- Putting Evidence to Work: A School’s Guide to Implementation
- The DIY Evaluation Guide
- Standard for teachers’ professional development
- Using tests for evaluation – EEF’s approach
Some related articles
- EEF Teaching and Learning Toolkit; a Cooperative Learning gloss
- Engaging staff effectively with their CPD; A Cooperative Learning gloss
- “Mum wasn’t good at maths either, love…” Girls, Maths & Cooperative Learning
- Learning Wisely – Living Virtuously: From the mountain to the valley