EEF Blog: Exploring the black box
Anneka Dawson, Evaluation Analyst at the EEF, on the importance of developing successful implementation and process evaluation.
At the EEF, we think that better use of evidence can make a real difference to outcomes for all pupils by helping schools spend their money more effectively. We invest in evidence-based projects and then test these ideas rigorously to give schools and teachers a better idea of ‘what works’.
But while the primary objective of our evaluations is usually to find out what impact a specific programme has on attainment, we also want to know why this is the case.
In some trials a promising intervention might not have had an impact because it wasn’t implemented faithfully or for a long enough period of time. In others, it might be that the programme in question was just not effective.
This is where robust and independent implementation and process evaluation (IPE) comes in. IPE differs from impact evaluation in that it explores the ‘black box’ of the programme being tested: the inner mechanisms of what is happening throughout implementation. When done well, it can give us some ‘big picture’ ideas about why some teaching and learning strategies are more effective than others.
Over five years of EEF evaluations, we’ve learnt a lot about the challenges and importance of conducting robust IPE. To help inform our evaluation processes, we commissioned Professor Neil Humphrey and his team at Manchester University to conduct a comprehensive review of the IPE literature. Their report is published on our website today.
Professor Humphrey defines IPE as ‘the generation and analysis of data to examine how an intervention is put into practice, how it operates to achieve its intended outcomes, and the factors that influence these processes’.
In addition to today's review, and to help our independent evaluators (and others working in social science RCTs) develop their own approach to IPE, Professor Humphrey has produced a new guidance document which outlines key principles recommended for a thorough IPE.
At the EEF we fund three different levels of trial: pilot, efficacy and effectiveness, all of which require different aspects of IPE. Pilot trials, for example, require more formative data to help improve an intervention, whereas the IPE of efficacy and effectiveness trials should focus on summative data to establish how and why the intervention is working in the way it is.
However, today’s report identifies a number of common principles of successful IPE that apply to all three levels of EEF trial. These include combining quantitative and qualitative data, and ensuring that the impact and process evaluations complement each other and are as integrated as possible.
Our IPE guidance also helps address two of the seven threats to the usefulness of RCTs identified by Ginsburg and Smith in their widely read paper, Do randomised controlled trials meet the ‘Gold Standard’? Milly Nevill, Evaluation Manager at the EEF, discussed the report's implications for our evaluations in a blog last month and said we would be updating our best practice guidance in line with the recommendations.
First, a robust IPE can help strengthen the logic model and design for a trial, and then go on to explore how well the intervention has actually been implemented, addressing the threat that the intervention is not well implemented. Second, our IPE guidance addresses the possible threat of an unknown comparison condition: it makes it very clear that we need to collect data on schools' usual practice at the start and end of a trial, so we can see what control schools have been doing and know what is actually happening under ‘business as usual’.
Today’s report is an important document for anyone interested in social science RCTs, and it will help us to build further on the research evidence for improving educational outcomes.
Anyone interested in discussing IPE and today’s new documents further can get in touch with Anneka on email@example.com.