The Department of Health is committed to evaluating policy innovations before they become national policy. To this end, the Department frequently initiates policy pilots in order to assess the effects of a policy in practice in a small number of settings before it is rolled out nationally. Findings from the evaluations of these pilots are expected to feed into decisions about the future direction of policy.
This project examined how, and for what purpose, recent policy pilots were initiated, and how decisions about the organisation of the pilots influenced opportunities for evaluation. Of particular interest were 'early decisions', i.e. those preceding the evaluation, and their impact on the pilots and their evaluation.
This study used a multiple case study design, involving an in-depth analysis of four policy pilots: the Partnership for Older People pilot; the Individual Budgets in Social Care pilot; the Whole System Demonstrators (telehealth and telecare); and the Drug Recovery Payment by Results (PbR) pilot.
The main methods used were interviews with Department of Health officials, pilot site managers and evaluators, as well as an analysis of documents, such as policy statements and evaluation reports.
Preliminary findings suggest that piloting is undertaken for a range of reasons, including piloting for experimentation and for evaluation. Testing whether and how a policy works in real-life settings is a key motivation for piloting. However, as the case studies demonstrate, it is not the only one. Interviews with officials, pilot site managers and evaluators show that other motivations are also at work, and these influence how pilots are organised and implemented. In particular, these include:
- Piloting as early implementation – pilots which are used as an opportunity for initiating, and for investing in, change locally (e.g. 'pump priming')
- Piloting as demonstration – pilots which are used to show others how to implement the policy successfully
- Piloting for learning – using a pilot as an opportunity for learning about how to operationalise the policy and overcome barriers to implementation (as a 'trailblazer' or 'pathfinder').
These purposes are not mutually exclusive and may co-exist, often in harmony. However, they are likely to inform choices about the organisation of the pilot, which then affect (and potentially limit) the opportunities for evaluation. For example, the Partnership for Older People pilot was initially planned as an opportunity for learning, with 29 pilot sites selected to implement a wide range of projects to improve the health and well-being of older people. The extensive variation between pilot sites, however, complicated efforts to evaluate the impact of particular projects in terms of effectiveness and cost, which would have benefitted from a more selective approach allowing for greater control of variables.
Preliminary findings were presented at a seminar at the Department of Health in February 2013.
The study findings were presented at a Centre for History in Public Health seminar in April 2013.
Presentations on this study were also given at the Social Research Association Annual Conference in December 2013 and at the UK Evaluation Society Annual Evaluation Conference in April 2014.
An article on the study was published in the Journal of Social Policy.
An article on the effectiveness of policy experiments was published in Evaluation in July 2015.
An article on the use of RCTs in policy making was published in the British Journal of Healthcare Management in August 2015.
For an alternative view on RCTs and policy evaluation, see the blog by Don Nutbeam from the Sax Institute.