Measuring the BIG Lottery Fund’s impact: quantifying the diverse

Posted on 09 September 2014 by Martin Wood, Director of Longitudinal Surveys.
Tags: Big Lottery Fund, benchmarking, charities, funding, measuring impact, volunteering

If you haven’t heard of the BIG Lottery Fund, it’s a grant-giving body that funds all kinds of third sector organisations and projects. From saving bees to caring for older people with cancer, it helps over 2.8 million individuals across the UK annually. It supports 90,000 volunteers, who clock up a total of 4 million hours, and it distributes an astonishing £600 million per year. So where do these numbers come from, and how does the Big Lottery Fund know they’re accurate?

Well, they asked us. Our task was to design a robust blueprint for gathering data across the Fund’s hugely varied projects, so I thought I’d explain how we helped the Big Lottery Fund arrive at these figures and the tools we created to update them year on year.

The first thing we did was to scope out the bare minimum of information we needed to collect; we didn’t want to create any more work than necessary for organisations on the ground. Grant-giving bodies told us that they would like to know who benefits from the money granted and to understand their impact on the third sector more broadly. There was a careful balance to be struck between data that was generalisable and data that was meaningful; we narrowed it down to five areas on which to collect data: service users, employment, volunteers, organisation and facilities.

The second thing we did was to pilot our questionnaires. As each project was reporting on its own data, we had to be very clear about the numbers we were looking for and to distinguish our work from the Fund’s own auditing process; this would help ensure the accuracy of the data collected.

To make it clear to projects what we were looking for, we opted for a telephone interview and provided interviewers with concrete examples. We also sent projects a ‘datasheet’ a couple of days before the interview, to allow them to prepare as much as possible. We explained some of the pitfalls of this kind of data, too, in an effort to avoid them. For example, we were keen to avoid the double-counting of service users, where people who use the same service more than once are counted as separate users.
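As a rough illustration of that pitfall, the sketch below counts distinct user identifiers rather than raw visit records. The record structure and field names are hypothetical assumptions for the example, not part of the Fund’s actual reporting tools.

```python
# Minimal sketch of de-duplicating service-user records so that repeat
# visits are not counted as separate users. The field names below
# (user_id, date) are hypothetical, chosen only for illustration.

visits = [
    {"user_id": "A101", "date": "2014-03-01"},
    {"user_id": "A101", "date": "2014-03-15"},  # same person, second visit
    {"user_id": "B202", "date": "2014-03-02"},
]

total_visits = len(visits)                          # counts every contact: 3
unique_users = len({v["user_id"] for v in visits})  # counts each person once: 2

print(f"{total_visits} visits made by {unique_users} distinct service users")
```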

Finally, we asked the projects to tell us how reliable their data was (for example, a ‘precise figure from records’, a ‘good estimate’ or a ‘best guess’) and matched these levels of certainty to margins of error. We also made sure that our sample was representative of a range of differently sized projects.
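To give a flavour of how a self-assessed reliability rating might translate into a margin of error around a reported figure, here is a minimal sketch. The percentage margins and the function name are illustrative assumptions, not the values or tools used in the actual study.

```python
# Illustrative only: the margin-of-error values below are hypothetical
# placeholders, not the figures used in the study.
MARGIN_BY_RELIABILITY = {
    "precise figure from records": 0.00,  # taken at face value
    "good estimate": 0.10,                # assume +/- 10%
    "best guess": 0.25,                   # assume +/- 25%
}

def plausible_range(reported: int, reliability: str) -> tuple[float, float]:
    """Return a lower and upper bound around a reported figure,
    widened according to the project's self-assessed reliability."""
    margin = MARGIN_BY_RELIABILITY[reliability]
    return reported * (1 - margin), reported * (1 + margin)

# e.g. a project reporting 4,000 service users as a 'best guess'
print(plausible_range(4000, "best guess"))  # (3000.0, 5000.0)
```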

The results helped BIG to come up with those all-important numbers and, to some extent, to compare different projects. But that’s not the sole benefit of the project. Crucially, this model of data collection will help other grant-giving organisations and be a positive move for the sector as a whole. The next step in understanding the impact of such funds is to pool data, something that the Big Lottery Fund is already talking about doing.

You can read more about this work here.
