All our interventions are evaluated
To create new knowledge about possible solutions to the central challenges in our welfare society, we need to know whether our approaches make a positive, sustained and cost-effective difference. That is why all our interventions are subject to impact evaluations of the highest research standards, using randomized controlled trials whenever possible. All impact evaluations are conducted in close collaboration with external researchers, and pre-analysis plans and subsequent results are always published.
Why do we use randomized controlled trials?
A randomized controlled trial is an impact evaluation method in which the people offered a new intervention are drawn at random. Those not drawn to receive the offer constitute the control group and receive the existing treatment, the same treatment that everyone outside the evaluation receives.
Let’s take an example: We have developed an intervention, NExTWORK, which is rooted in a local network of firms that offer internships to young unemployed people. Now we would like to know whether the intervention improves young people’s chances of entering education or employment compared to the existing municipal efforts they would otherwise have received.
If we let the young people choose for themselves whether they preferred NExTWORK or the usual jobcentre program, it might be the most motivated young people who signed up for NExTWORK. When we later compared the two groups, a larger share of the NExTWORK youth might have started jobs or education than those who received the usual jobcentre program. But that would not necessarily be because of NExTWORK; this group would simply have had better chances of starting education or a job to begin with. That is why it is important to use a lottery to pick those who are offered the intervention.
Lottery and many participants
Using a lottery ensures that the two groups are as similar as possible, as long as the groups are sufficiently large. That is why we need many participants in the evaluation before we can say anything precise about the impact of the intervention.
Thus, a lottery and many participants are two important ingredients in a randomized controlled trial. They ensure that when we compare the NExTWORK youth with the control group at a later point in time, any difference between the two groups can be attributed to NExTWORK, because the two groups are otherwise similar on average.
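The lottery described above can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not the actual assignment procedure used in the NExTWORK evaluation:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical pool of 1,000 eligible young people, identified by number.
participants = list(range(1000))

# The "lottery": shuffle the pool and split it in half.
random.shuffle(participants)
treatment_group = participants[:500]  # offered NExTWORK
control_group = participants[500:]    # receive the usual jobcentre program

# With sufficiently large groups, random assignment balances both observed
# and unobserved characteristics (such as motivation) on average, so a later
# difference in outcomes can be attributed to the intervention itself.
print(len(treatment_group), len(control_group))  # 500 500
```

Because assignment depends only on chance, motivated and less motivated young people end up in both groups in roughly equal proportions.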
Is it worth it?
If we find that more NExTWORK youth are in jobs or education than in the control group, it is important to compare this improvement with how many extra resources NExTWORK requires relative to the existing efforts in the jobcentres, in order to assess whether the intervention is cost-effective.
A randomized controlled trial does not only tell us whether the intervention works but also how large the impact is compared to the existing efforts in the given area. When we compare this improvement with the relative use of resources in the intervention, we can assess whether the intervention creates positive and cost-effective changes and thus, whether society can benefit from the intervention.
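The cost-effectiveness comparison above amounts to simple arithmetic. The figures below are purely hypothetical, chosen only to show the calculation; they are not results from any actual evaluation:

```python
# Hypothetical figures for illustration only; not actual evaluation results.
n = 100                    # participants per group (assumed)
successes_treatment = 40   # NExTWORK youth now in job or education
successes_control = 30     # control-group youth now in job or education
cost_treatment = 60_000    # assumed cost per participant, DKK
cost_control = 45_000      # assumed cost of the usual programme, DKK

extra_successes = successes_treatment - successes_control  # 10 more per 100
extra_cost_total = n * (cost_treatment - cost_control)     # 1,500,000 DKK

# Price of one additional young person in job or education:
cost_per_extra_success = extra_cost_total / extra_successes
print(cost_per_extra_success)  # 150000.0
```

Whether such a price is worth paying is then a question of weighing it against the long-term benefits to the young people and to society.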
When we conduct an impact evaluation, the quantitative results from a randomized controlled trial cannot stand alone. It is necessary to link these results to qualitative insights from the implementation phase: What seems to work, how, why, and under which circumstances. Observations and interview data complement and nuance the quantitative results. Hence, our impact evaluations are supplemented with implementation evaluations.
Evaluations in progress
In addition to the evaluation of NExTWORK we are also evaluating other interventions.
We are evaluating whether the well-being material Perspekt 2.0 can improve pupil well-being in 4th and 5th grade. The first results of this evaluation will be available during 2020.
Likewise, we have initiated an evaluation of the early language development intervention TipsByText, in which parents of pre-school children receive weekly text messages with tips for stimulating their children’s language development.
Links to our evaluations in progress:
A credible evaluation
It is pivotal for us that the results of our evaluations are credible. That is why we let external researchers direct the evaluations, and why results are always published. To ensure transparency in the data work, we write pre-analysis plans, which are published before we get access to endline data. A pre-analysis plan is a research document in which we specify how we will analyse the data and which results we will present in the final evaluation report. In that way we publicly commit to presenting all results, both positive and negative, and thereby avoid any suspicion that certain results have been handpicked.
Link to pre-analysis plans: PERSPEKT