Investors need rigorous assessments of Social Impact Bonds

30 April 2018
By Katy Pillai

A major investor highlights the vital role that research and evaluation should play in developing this form of outcomes funding.

Evaluation and research into Social Impact Bonds (SIBs) are a hot topic for Big Issue Invest. We are one of the UK’s leading social impact investment firms, having invested in approximately 350 charities and social enterprises since 2005, in our quest to dismantle poverty and create opportunity.

Our investments support areas such as access to housing, financial and social inclusion, mental and physical health and well-being, and employment, education and training.

We have made several SIB investments since 2012 and have watched the market develop. Our first-hand experience is that there is more work to be done to refine the model, but there have been impressive outcomes for programme participants and the charities delivering the contracts in which we have invested.

How we use research and evaluation in SIBs

We aim to address the structural challenges of SIBs and maximise their individual and collective social impact. Research and evaluation can help in this goal. Many commissioners and service delivery providers are unfamiliar with SIBs and I often direct them to impartial, well-informed research to build awareness and understanding.

Discussing Big Issue Invest’s learnings and experiences from individual programmes with evaluators helps us to contextualise the situation and identify emerging trends in this rapidly developing field. These partners can develop the tools and models to test and critique our theories and insights ‘from the field’ and evaluate the wider market, whereas our frame of reference is often limited to our specific investments.

Research in social investment and SIBs is at a nascent stage. It is crucial that the right foundations are laid today to enable good-quality ‘market’ level analysis in the future. Consistency and rigour of approach will pave the way for systematic reviews and meta-analyses, essential if outcomes-based approaches are to become commonplace in the commissioner and policymaker toolkit.

We also welcome more quantitative approaches and ambitious evaluations that compare SIBs to traditional fee-for-service mechanisms or payment by results more broadly. SIBs are often conflated with outcomes-based approaches in general, which makes it very hard for investors to assess – and improve – their social impact.

Funding is, of course, needed to allow this work to take place. At the moment, evaluations are too often the balancing item in a very limited budget, constraining their ambition. We wholeheartedly support the calls for a ring-fenced fund for SIB evaluations, recognising the value of the output to commissioners, central government and potentially also philanthropic funders who might seed the fund.

Understanding what, why, how

We are an impact investor: outcomes are our reason for being, not just a by-product of our investment activity. Social impact due diligence underpins every investment decision we make.

We look closely at the theory of change for each SIB. There needs to be a coherent and credible hypothesis for how outcomes will be improved for the programme participants and - beyond that - how the programme could help to tackle the underlying issue through, for example, earlier intervention. We interrogate potential perverse incentives in order to mitigate them. Research and evaluation from previous programmes helps us to do this: we rely on it to validate the causal link between inputs, outputs and outcomes and complete our due diligence of the intervention.

It’s important that everyone involved reviews the theory of change periodically after the contract launches. One of the strengths of SIBs is that they shine a light on what works and what doesn’t, enabling real-time improvements and sharing of learnings for future contracts. If we scrimp on monitoring and evaluation, we undermine the programme and indeed the SIB model.

Evaluations therefore need to be robust, relevant and timely. We want to understand not only the results achieved but also why they are (or are not) achieved and how we can replicate and improve on them. That might mean a programme evaluation or an impact evaluation, or a qualitative or quantitative approach, depending on the context – but we certainly need more than outcomes verification.

We seek insights into the drivers of success so they can be reflected in future projects. The GLA Rough Sleeping SIB in 2012 was divided into two lots awarded to separate providers, one funded by Big Issue Invest. We know the absolute outcomes achieved by the programme but would like to dig deeper into whether different operational or investment approaches had a bearing on success.

We are keen to work with the evaluator community to design the evaluations and contribute to them. It’s important to be confident that Big Issue Invest’s loan has achieved its social objectives – and those of our investors in turn – so we are consumers of evaluations as well as contributors to them.

We are one of very few organisations that has worked across several SIBs in different regions and policy areas. We can contribute data, insight and practical experience and welcome the opportunity to do so. At a practical level, we can coordinate with the evaluator to minimise the data collection burden on the service provider’s staff and the programme participants. If we can bring the evaluator into the design phase early, we can also incorporate their evaluation into the delivery model from the outset, avoiding duplication or complication down the line.

Using data and analysis to target interventions

Reliable data and analysis are essential to high-quality SIB design. For example, we are involved in an ‘edge of care’ SIB for young children where it is unfeasible to roll out an intensive (in other words, expensive) intervention to all children on social services’ radar. Rather than only working with children on the very cusp of care – when it is often too late to reverse their trajectory or the trauma they have suffered – a researcher is working with commissioner data to identify early risk factors that increase a child’s propensity to enter care. The programme will be targeted towards these high-risk children as well as those on the very cusp of care. This allows the commissioner to fund an early intervention service that is also cost-effective, which is often a challenge in outcomes-based commissioning. There is huge potential to harness data and analysis in this way to design preventative services.

The value of timely feedback

Speed of evaluations is a challenge. Evaluations are valuable to SIB stakeholders when developing follow-up programmes and carrying out due diligence. If investors have a good level of confidence in the achievable outcomes, the cost of their risk capital should be lower. That is in everyone’s interest. It doesn’t encourage evidence-based commissioning if the evidence is released after the next programme is launched!

Midline and end-line reviews as part of a formal evaluation are, of course, extremely important, but they are not enough. Outcomes-based programmes also need shorter, informal feedback loops, preferably involving the evaluator. Early results and findings can be used to improve programme delivery – but not if they are shared in an end-of-programme evaluation that takes a year to publish. Ideally, we’d like a quarterly or six-monthly check-in with the evaluators that can identify and unpick performance and its drivers.

We recognise the tension between this approach and concerns that the evaluation will influence the programme outcomes. A balance needs to be struck. SIBs support people with complex needs who deserve the best possible chance of better life outcomes, so although evaluation rigour is crucial, we owe it to them to make the intervention as effective as it can be.

Importance of counterfactuals

Three inputs are usually needed to assign a value to a SIB outcome: (1) the projected costs to deliver a programme (preferably validated through a competitive procurement process); (2) the costs per outcome achieved under comparable programmes, if known; and (3) the savings case (the projected benefits for the commissioner). If you don’t have an understanding of what would have happened anyway, at least one of these calculations will be flawed. That’s why we can’t afford to disregard the counterfactual.

That doesn’t mean that every SIB needs to link payments to performance compared to a counterfactual, measured by an RCT. There are lots of factors to consider when designing the payment mechanism and there is no single ‘right’ approach. However, the counterfactual can always be taken into account. Under a rate card approach, the rates should be set after considering deadweight – even if the assessment is imperfect, it is better than ignoring it completely. The counterfactual can then be assessed in the programme evaluation and used to inform the pricing of future contracts.
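To make the deadweight adjustment concrete, here is a minimal sketch in Python. The function name and all figures are purely hypothetical illustrations of the arithmetic, not drawn from any actual rate card or Big Issue Invest programme.

```python
# Hypothetical sketch of deadweight-adjusted outcome pricing.
# All names and figures are illustrative, not from a real rate card.

def cost_per_additional_outcome(delivery_cost, expected_outcomes, deadweight_rate):
    """Unit cost once outcomes that would have happened anyway are stripped out.

    deadweight_rate is the estimated share of outcomes attributable to the
    counterfactual rather than to the intervention itself.
    """
    additional_outcomes = expected_outcomes * (1 - deadweight_rate)
    return delivery_cost / additional_outcomes

# A programme costing 500,000 that expects 100 positive outcomes,
# of which an estimated 20% would have occurred anyway:
naive_rate = 500_000 / 100                                    # 5,000 per outcome
adjusted_rate = cost_per_additional_outcome(500_000, 100, 0.20)
print(adjusted_rate)  # 6,250 per *additional* outcome
```

Even a rough deadweight estimate changes the unit price materially – here by 25% – which is why ignoring the counterfactual leaves at least one of the pricing inputs flawed.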

I am not saying SIBs should be commissioned only if there is perfect data to value the counterfactual. Rather, I am emphasising the need for new approaches that measure outcomes and cost-per-outcome to allow commissioners to make evidence-based decisions in future. Big Issue Invest is trialling approaches that allow an outcomes-based contract to be launched with imperfect information, while ensuring checks and balances limit windfall gains and losses and include mechanisms to tackle the information gap.

One option is to run an initial ‘discovery phase’ of the contract for one to two years. Outcomes pricing in the discovery phase is based on an estimate of the counterfactual, with parameters set to ensure that no party makes excess gains or losses. In this way, the partners have the opportunity to implement the SIB delivery model while outcomes and the counterfactual are measured rigorously. The data and analysis are then incorporated into a revised payment mechanism for the rest of the contract, after which point it operates like a ‘standard’ SIB. This approach bridges the knowledge gap without delaying a potentially high-impact programme or creating inequitable risk and return.
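One way such parameters could limit windfall gains and losses is a simple collar on outcome payments. The sketch below is an assumption about how a discovery-phase mechanism might work, not Big Issue Invest’s actual contract terms; the function and figures are hypothetical.

```python
# Hypothetical collar on discovery-phase outcome payments.
# Floor and cap values are illustrative, not from a real contract.

def collared_payment(measured_outcome_value, floor, cap):
    """Clamp the payment between a floor and a cap so that neither the
    commissioner nor the investor makes excess gains or losses while the
    counterfactual is still being measured."""
    return max(floor, min(cap, measured_outcome_value))

# With a floor of 80 and a cap of 110 (arbitrary units):
print(collared_payment(60, 80, 110))   # 80: investor losses are limited
print(collared_payment(95, 80, 110))   # 95: payment tracks measured value
print(collared_payment(140, 80, 110))  # 110: windfall gains are limited
```

Once the discovery phase has produced a rigorous counterfactual estimate, the collar can be removed or widened and the contract repriced to operate as a standard SIB.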

Where next?

SIBs bring together different worlds. The success of SIBs is dependent on partnerships where the whole is greater than the sum of the parts. They require new ways of working for everyone involved – for investors, providers and commissioners. I expect they can seem strange to evaluators as well. Forging new links and understanding the perspectives of others is crucial.

We are starting to see these worlds come together and collaborate for better outcomes. There is more interaction and understanding between researchers and evaluators, policymakers and budget holders, delivery organisations, and investors. It is early days but the outlook is promising.

Katy Pillai is Investment Director, Big Issue Invest: @katyjones | @bigissueinvest

If you wish to receive our weekly blog on SIBs, please email and we will add you to our subscriber list.