Constructing your comparison group.
Random allocation is the most robust way to establish a comparison group, but it is not always practical to run a trial in this way. For example, if you want to evaluate the impact of a new sanctions and rewards system, it would not be possible (or ethical) to give only half the school access to rewards for good behaviour or effort. In this case, a matched comparison group is more suitable.
There are three broad categories for establishing a comparison group:
- Random allocation
- Matched comparison group
- Simple comparison group
Random allocation involves one 'population' of pupils being allocated randomly into an intervention group, where they will receive something new (such as a new style of teaching, or a specific intervention to improve numeracy skills), and a control group, where they will continue receiving their education as usual. The population may be a specific group of pupils (e.g. Pupil Premium), a year group, or a subject group. The most important thing to remember with random allocation is that once pupils have been assigned to a group they cannot change. If pupils change groups, the allocation is no longer random and it will no longer be a valid randomised controlled trial. Randomising pupils is a relatively straightforward process involving three steps. Click here for a step-by-step guide or watch a short guide here:
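As a minimal sketch of the allocation step, the Python snippet below shuffles a pupil list and splits it in half (the pupil names are made up for illustration; the step-by-step guide linked above covers the full process, and a spreadsheet works just as well):

```python
import random

# Hypothetical pupil list -- replace with your own class or cohort.
pupils = ["Amira", "Ben", "Chloe", "Dev", "Ella", "Finn", "Grace", "Hassan"]

random.seed(42)        # fix the seed so the allocation can be reproduced
shuffled = pupils[:]   # copy, so the original list is untouched
random.shuffle(shuffled)

# Split the shuffled list in half: first half intervention, second half control.
half = len(shuffled) // 2
intervention = sorted(shuffled[:half])
control = sorted(shuffled[half:])

print("Intervention:", intervention)
print("Control:     ", control)
```

Fixing the seed means the same allocation can be regenerated later, which helps if the allocation is ever queried.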
The best way of designing an evaluation to test the effectiveness of an intervention is to allocate pupils randomly to one of two groups: control or intervention. This way, we minimise selection bias (a major source of misleading results), which is an important step in making your results as strong as possible.
Sometimes it is not possible to randomise your groups. For example, when an approach is used that requires a cultural change in the school such as a new rewards and sanctions system, randomising half the pupils to comply with the new system will cause chaos!
What is your unit of randomisation?
- Pupils. E.g. some pupils receive a mentoring intervention and some don't.
- Classes. E.g. some classes receive verbal feedback and some receive written feedback.
- Departments. E.g. some departments trial a new lesson structure and some continue with a three-part lesson structure.
- Schools. E.g. some schools within a local authority, Teaching School Alliance or Academy group, trial one approach to parental engagement and others try a different approach.
You only need to think about the unit of randomisation at this stage.
Matched Comparison Group
When random allocation isn't possible, the best approach is to establish a matched comparison group. The matched group might be current pupils not receiving the intervention, or previous year groups if you have historic data using the same or similar tests. The main limitation of this approach is that there are likely to be differences between the two groups that cannot be controlled for, such as school or class. However, if you use matching you can at least control for known differences. A step-by-step example of how to match is provided here.
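One simple way to match is nearest-neighbour matching on a known characteristic such as prior attainment: each intervention pupil is paired with the unused comparison pupil whose score is closest. The sketch below uses made-up pupils and scores purely for illustration; the step-by-step example linked above shows how to do this properly:

```python
# Hypothetical prior-attainment scores for the intervention pupils and the
# pool of possible comparison pupils (e.g. last year's cohort).
intervention = {"Amira": 85, "Ben": 72, "Chloe": 91}
comparison_pool = {"Dev": 70, "Ella": 88, "Finn": 90, "Grace": 84}

matches = {}
available = dict(comparison_pool)
for pupil, score in intervention.items():
    # Pick the comparison pupil with the smallest gap in prior attainment.
    best = min(available, key=lambda p: abs(available[p] - score))
    matches[pupil] = best
    del available[best]   # each comparison pupil is matched only once

print(matches)
```

This greedy one-characteristic match only controls for prior attainment; matching on further known characteristics (e.g. Pupil Premium status) follows the same idea but with more columns.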
Which RCT design is right for you?
You could try a compensation design, whereby children in the control group receive a totally different intervention, one that might help them in another area of their learning.
This can be used when an intervention is intended for the neediest pupils, such as a reading catch-up programme. One group should definitely get the intervention (those behind), another group definitely does not need it (those ahead), but there might be a third group for which you do not know whether giving them the intervention is the best use of resources. Pupils in this borderline group can be allocated at random and their results are compared.
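The borderline-group idea can be sketched as follows, using made-up reading scores and made-up cut-offs of 80 and 100 purely for illustration: pupils clearly behind always get the catch-up programme, pupils clearly ahead never do, and only the borderline group is randomised.

```python
import random

# Hypothetical reading scores. Below 80 clearly needs the catch-up
# programme; above 100 clearly does not; in between is borderline.
scores = {"Amira": 72, "Ben": 85, "Chloe": 95, "Dev": 110,
          "Ella": 78, "Finn": 88, "Grace": 92, "Hassan": 105}

behind = [p for p, s in scores.items() if s < 80]           # always treated
ahead = [p for p, s in scores.items() if s > 100]           # never treated
borderline = [p for p, s in scores.items() if 80 <= s <= 100]

# Randomise only the borderline group; their results are then compared.
random.seed(7)
random.shuffle(borderline)
half = len(borderline) // 2
treated_borderline = borderline[:half]
control_borderline = borderline[half:]
```

Only the two borderline sub-groups are compared in the analysis, since they are the only ones formed at random.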
Business as usual design
The control group is taught 'normally', i.e. as they were before the intervention (which may or may not be an improvement) was proposed.
When you want to compare two competing approaches, you can allocate pupils or teachers randomly to each group. If both approaches are plausible, it may be fairer to allocate randomly than to let people choose which one they do. If you let them choose, one approach may prove more popular, which could make delivery problematic, and there are likely to be systematic differences in who chooses which version (for example, one might be favoured by better teachers or higher-ability pupils).
Waiting List Design
You could try a waiting list design, whereby your control group receives the intervention, but only after you have given it to the intervention group.
Simple Comparison Group?
You could try a simple comparison group, whereby you compare your results with those of a previous cohort (year group), or a cohort in a different school. It is important to understand that there may be significant differences between the two groups. For example, one cohort may have a larger percentage of children with special educational needs, or lower prior attainment. If you use a simple comparison group, it is important to understand the limitations and regard it as a stepping stone to improving your evaluation.
Design intervention inconclusive
We recommend that you think again about the intervention that you want to evaluate and see whether or not you can change it to make the evaluation more acceptable.
Matched Comparison Group
For a matched comparison group you identify similar children from a previous cohort (or school) to measure your intervention against. The main limitation of this approach is that there are likely to be differences between the two groups that cannot be controlled for, such as school or class. However, if you use matching you can at least control for known differences. A step-by-step example of how to match is provided here.
Design intervention complete
As per your design, you should implement the intervention with the group identified. As the intervention proceeds, you will need to check that it is going according to plan.