A group of teachers in a primary school were designing an evaluation of a feedback intervention aimed at improving Year 6 pupils’ data handling ability. They had a clear plan for all other steps of the evaluation, were happy with the reliability of the pre- and post-tests they had chosen, but were unsure about the best timing of the post-test.
The post-test could have been given immediately following the intervention (at the end of Term 1), but this would have added another test to an already busy schedule. They therefore decided to give the post-test mid-way through the second term; if the intervention had a lasting effect, it would still be detectable. Consequently, the intervention was implemented as planned in Term 1, but the post-test was given just before half-term in Term 2, with all students returning to the same planned curriculum for the duration of that term.
By designing their evaluation in this manner, the teachers were able to avoid adding a further test to an already busy schedule, while still conducting a well-designed, effective evaluation.
Had the teachers not been concerned about the timing of the post-test, they could have delivered it immediately following the intervention’s completion, and then followed it up later with another test. This would have given them an indication of both the short-term and the longer-term effects.