New EEF podcast looks at what we’ve learned so far through our trials
In this second episode of our Trialled and Tested podcast, Jamie Scott from Evidence Based Education speaks to the EEF's joint heads of programmes, Eleanor Stringer and Matthew van Poortvliet, to find out more about our approach to identifying projects to fund, commissioning evaluations, and scaling up promising projects.
Here’s a full account of the questions put to Eleanor and Matthew:
- 2:02 What does the EEF look for when considering which projects to fund?
- 3:15 How much initial evidence do you need to get EEF funding for a project?
- 3:58 What are the different stages of the EEF evaluation pipeline, and why and how might you scale up a project from efficacy to effectiveness?
- 6:45 Examples of scale-up projects
- 9:20 Do EEF evaluation projects typically come out of academic institutions or schools?
- 11:19 Why have some trials been re-trialled?
- 14:08 How does the evidence behind the ‘Embedding Formative Assessment’ project differ from other professional development programmes that might not have been trialled?
- 16:41 How does the EEF respond to criticism of the approach to project evaluation, especially a perceived preference for randomised controlled trial designs?
- 21:55 Some EEF trials have found results that conflict with previously-published findings – does this point to a replication problem in education research?
- 24:43 How has the work the EEF has done since its inception improved the scientific endeavour of evaluation?
- 26:33 What do you hope is the lasting impact of EEF project evaluations?
- 28:09 Why are certain approaches or strategies not included in the EEF toolkit, despite there being strong evidence behind them?