Risk assessment: Why the Chartered College’s CPD Quality Assurance project might have a negative effect on the value and variety of CPD available to schools.
[5 minutes to read]
The Chartered College of Teaching has just launched another online survey about their ambitious CPD Quality Assurance project.
This project faces a number of serious challenges before it even gets off the ground which, if left unaddressed, could inadvertently harm both the quality and range of CPD available for decades to come. I therefore urge all MAT and school leaders, staff and any and all training providers to take a very keen interest in this ambitious project.
Some concerns are raised in Thoughts on CPD accreditation #1, which pushes for a more comprehensive preliminary conversation. This article, however, investigates the show-stopping practical problem that we all know too well: how do we measure the impact of CPD? If the system is to be a reliable authority, this question must be answered. But can it be?
Much to their credit, the College and their tertiary partners have included the following question in October’s original survey. What follows is an edited version of my response.
Question #2: What challenges and/or risks, if any, do you see in developing a system to quality assure CPD provision?
As I hinted in the first instalment, I am positive about the idea in theory but sceptical about the practical reliability and validity of a catch-all system to quality-assure CPD provision. Let us break that down by looking at two quality indicators used in other sectors, including health, to measure quality of provision.
The sheer complexity of the field of education largely rules out the use of what are termed result indicators. The term simply means that you measure outcomes. That sounds simple, but it is fraught with peril. For example, pupils may well feel safer following the first term's restorative justice CPD. But only because the school bully was permanently excluded last June.
But even if we forget for a moment the adage that ‘correlation does not imply causation’ and all the other similar pitfalls (which can be overcome in theory), the question remains: Why does the same CPD programme succeed beyond all expectations in one school and fail utterly in another?
Likely because even the best possible training provision cannot stand against inadequate follow-up by the school. My worry here is that by focusing on the provision itself, the new CPD Accreditation programme will measure what may in fact have the least impact on the effectiveness of that provision, while effectiveness hinges on the one thing the CPD provider cannot be called to account for: the school's own follow-up. That leaves providers at the mercy of the roulette wheel.
That leaves the accreditation project with only process indicators. This term implies that the methods used by the provider fulfil certain criteria, which would again have to encompass a (too?) vast variety and mix of teaching methodologies (e.g. oracy, metacognition, feedback), of target areas (e.g. subject range, key stage, curriculum) and of provision types (e.g. training, proxy-training, consulting, IT).
Also, while process indicators do secure a baseline, the vast majority of providers in a field as competitive as school CPD will already meet any fundamental quality criteria, rendering the accreditation pointless. See the next section for examples actually found in the survey.
I hope I am proved wrong, and I will be very excited indeed to follow this process. However, to reiterate: the real value of this project rests entirely upon the quality of the organisation of the preceding enquiry. Only by systematically uncovering and combining the insights of stakeholders will the consultation reach conclusions beyond those of the individual participants and organisations.
Does your course include snacks?
The survey suggests the following pieces of information could be provided to indicate the quality of CPD provision. (As requested, I have ranked them in order of usefulness, where 1 is the highest score.) These are prime examples of the potentially superficial nature of process indicators.
1. Testimonials from prior participants and/or their employers
2. Feedback from participant surveys
3. Information on the evidence base behind the CPD content and delivery model
4. Information on how the CPD is aligned to approaches that have been shown to be effective elsewhere
5. An external evaluation of impact by a different organisation
6. A provider’s own, internally-conducted evaluation of their impact
7. Information on the provider’s expertise (e.g. a biography, qualifications, track record, etc.)
8. Reviews of commercial processes (e.g. booking, invoicing, ticketing, venue, refreshments)
With the possible exception of point 5, which established CPD provider does not already provide all the above items to potential clients? The risk here is that this promotion simply goes through an accreditation agency rather than an advertising agency.
As for points 6 and 7, they are unlikely to say much about the quality of the CPD: self-evaluation is perilous at the best of times, and we have all experienced PhDs in education with very little practical insight. (And if refreshments influence the choice of CPD provision, then there is not much hope.)
This leap straight to measurement instruments is one of the reasons why I am convinced the consultation needs to start at the most open level possible (counterbalanced by the effective organisation of the conversation I have indicated previously). By their very nature, these suggestions risk limiting the scope of the enquiry before it has even begun.
Some related articles
- EEF Teaching and Learning Toolkit; a Cooperative Learning gloss
- Engaging staff effectively with their CPD; a Cooperative Learning gloss
- “Mum wasn’t good at maths either, love…” Girls, Maths & Cooperative Learning
- Learning Wisely – Living Virtuously: From the mountain to the valley
* * *