Estimating impact is hard – innovation in data collection could be the answer

As a social enterprise, at Balloon we spend a lot of time thinking about our impact. Those who take part in our programmes would happily testify to the impact created (check out other blog posts for examples). In particular, much of that impact is in ‘soft’ areas like hope, confidence or skills (e.g. financial understanding), but these are hard to measure.

This leaves us with a key challenge – how can we showcase this impact to external stakeholders who haven’t been on the programme? You might be thinking:

“Easy! Ask entrepreneurs how they rate their (e.g.) financial knowledge at 1) the beginning of the programme and 2) the end of the programme, and you’ll see the difference.”

Well, that’s what we thought too. So that’s what we did. But what we got back was strange: entrepreneurs were giving ratings that were the same as, or worse than, when they started. Convinced that this was an issue with our data collection method rather than the real impact of the Balloon programme, we got to thinking about what could be going wrong.

We came up with a hypothesis: entrepreneurs change their perspective during the programme. They learn how complicated finance can be and take in a lot of new concepts, so when asked at the end of the programme they give a more accurate appraisal of themselves than they did at the beginning.

To test our hypothesis, we trialled an innovative data collection method in recent Fellowship programmes in Kenya and Ghana. We kept the traditional pre-programme and post-programme measurements, but at the post-programme stage we introduced a subtle change: entrepreneurs had to rate themselves as they were before the programme, from memory (‘post-programme before’), as well as where they are now (‘post-programme after’). Here are the results:

So clearly, we can see that our entrepreneurs overestimated their position before the programme, as we expected. The mean pre-programme rating was 8.42, compared to 6.60 at the post-programme (before) measurement. Similarly, comparing the two post-programme assessments (6.60 and 8.90) gives a more accurate picture of the progress entrepreneurs make on our programme (all of the differences were statistically significant). What this shows is clear impact created in our entrepreneurs.
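For anyone curious about the mechanics, here is a minimal sketch of how this kind of three-way comparison can be checked, assuming each entrepreneur gives paired ratings on a simple numeric scale. The numbers and the use of paired t-tests via scipy are illustrative assumptions, not a description of our exact analysis.

```python
# Minimal sketch of the three-way comparison (illustrative numbers only).
import numpy as np
from scipy import stats

# One value per entrepreneur for a single item, on a 1-10 scale.
pre         = np.array([9, 8, 9, 8, 8, 9, 8, 8])  # rated at the start of the programme
post_before = np.array([7, 6, 7, 6, 7, 7, 6, 7])  # "how were you before?", asked at the end
post_after  = np.array([9, 9, 9, 8, 9, 9, 9, 9])  # "how are you now?", asked at the end

print("means:", pre.mean(), post_before.mean(), post_after.mean())

# Paired comparisons: did entrepreneurs revise their view of their starting
# point, and how big is the gain over the programme itself?
print(stats.ttest_rel(pre, post_before))         # initial rating vs retrospective rating
print(stats.ttest_rel(post_after, post_before))  # end rating vs retrospective rating
```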

Now you might be thinking:

“Sure, but maybe entrepreneurs are just biased in rating their current position as better than before they started as a kind of ‘look how much I’ve developed’ mechanism.”

We were worried about that too. To check whether this was happening, we looked at the percentage difference between the post-programme (before) and post-programme (after) assessments for each item, and then ranked all the items by that difference (a short sketch of the calculation follows the table). Here are the results:

| Rank | Item | Percentage difference |
| --- | --- | --- |
| 1 | Understanding of record keeping | 44% |
| 2 | Understanding of marketing | 41% |
| 3 | Understanding of business strategy | 40% |
| 4 | Understanding of financial concepts | 39% |
| 5 | Ability to come up with new ideas to grow the business | 38% |
| 6 | Ability to implement ideas into real changes in the business | 38% |
| 7 | Confidence with customers/suppliers | 31% |
| 8 | Customer interaction skills | 29% |
| 9 | Hope for the future | 28% |
| 10 | Self-belief in the business and themselves as entrepreneurs | 24% |

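For transparency on how a ranking like this could be produced, here is a minimal sketch of the percentage-difference calculation, using made-up item means rather than our actual data; pandas and the column names are just illustrative choices.

```python
# Minimal sketch of the percentage-difference ranking (made-up item means).
import pandas as pd

scores = pd.DataFrame({
    "item": ["Record keeping", "Marketing", "Self-belief"],
    "post_before": [6.2, 6.4, 7.3],  # retrospective "before" rating, asked at the end
    "post_after":  [8.9, 9.0, 9.1],  # "now" rating, asked at the end
})

# Percentage improvement from the retrospective baseline to the current rating.
scores["pct_difference"] = (
    (scores["post_after"] - scores["post_before"]) / scores["post_before"] * 100
).round(0)

# Rank items by how much they improved.
print(scores.sort_values("pct_difference", ascending=False))
```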
Interestingly, the items which are most explicitly focussed on during the programme (record keeping, marketing and business strategy) were at the top of the list, whereas impact areas which are less explicitly focussed on – like encouraging self-belief – were near the bottom.

This suggests that entrepreneurs are not just applying a blanket rule to all the questions, such as ‘I am three points better on everything’. What is actually happening is that entrepreneurs are recognising that greater impact is being created in the areas where they have spent more time and energy working with Fellows. The fact that the percentage difference for the highest-ranked item (44%) is almost double that of the lowest-ranked item (24%) gives us even more confidence that this is what is driving the effect.

Estimating impact is a challenge that most social enterprises struggle with; investing in innovative data collection methods could be one way around some of those challenges. Impressed with this pilot, we have rolled the process out across our Balloon ICS programme, which lasts twice as long as our Fellowship programmes. We are keen to see what effect this has on scores!

This piece was written by Nicholas Andreou. Nicholas leads the Insight and Impact function at Balloon Ventures. He spends most of his day looking at M&E data, running research projects and thinking about how to improve Balloon’s impact. Nicholas holds a PhD from the University of Nottingham and previously held student research positions at Harvard University and the World Health Organisation.