One of the aims of our programme is to impart new skills and knowledge to the entrepreneurs we work with. Our Insight & Impact team therefore tries to measure change in skills and knowledge to see how well we are doing. About a year ago we published a blog post describing our struggles to measure this impact reliably. In short, the problem was that when we asked entrepreneurs about their skills and knowledge, they gave themselves very high ratings both at the beginning and at the end of the programme, so the impact we measured was small or absent.
After talking with them, we realised that the issue lay with the measurement method we were using, rather than there being no change at all. At the beginning of the programme, entrepreneurs often don't have a clear understanding of what good marketing (to take one example) looks like. The optimism of East Africans led them to give themselves high scores even when those scores were not warranted, which left very little room to move up the scale at the end-of-programme measurement.
To reduce the bias in our initial tool, we piloted a new system in which respondents rate themselves from 1 to 10 on several skills at the beginning of the programme. When the programme ends, they rate themselves twice more: once for how they perceive they were at the start (i.e. from memory) and once for how they reckon they are at the end. We thus have three ratings:
- One pre-programme, which is positively biased; that is, scores are higher than they should be.
- One post-programme (start), which we believe is a better indicator of their starting level. At the end of the programme we ask entrepreneurs how they think they were at the start, once they have a more realistic understanding of the gaps and flaws they had at the beginning.
- One post-programme (end), which we assume is more accurate because it is given in conjunction with the post-programme (start) rating.
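To make the comparison concrete, here is a minimal sketch of how the improvement between the two post-programme ratings could be tested, using invented illustrative data (the variable names and scores are hypothetical, not our actual dataset; we used a standard paired test on the real data):

```python
from statistics import mean, stdev

# Hypothetical 1-10 ratings for one skill, one pair per respondent:
# post_start = how they remember being at the start, rated at programme end;
# post_end   = how they rate themselves at the end of the programme.
post_start = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
post_end = [7, 8, 6, 8, 7, 7, 6, 7, 8, 7]

# Paired differences: one improvement score per respondent.
diffs = [e - s for s, e in zip(post_start, post_end)]

# Paired t statistic: mean difference divided by its standard error.
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / n ** 0.5)

print(f"mean improvement: {mean(diffs):.2f}")
print(f"paired t statistic (df = {n - 1}): {t_stat:.2f}")
```

Because both ratings come from the same respondent, pairing the scores removes between-person differences and tests only the within-person change.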
We believe that this system is more reliable (see the original blog post for an explanation why), and the data collected from our Fellowship programme showed strong impact. We then rolled out the methodology to our ICS programme, curious to see whether the findings would be replicated.
Even though the two programmes are similar, on the ICS programme volunteers work with entrepreneurs for twice as long (12 weeks), and working groups combine international and local volunteers. We expected these differences to result in greater impact.
What the data show
Data from 104 entrepreneurs on the ICS programme give a similar picture to what we found in the Fellowship pilot. The results are displayed in the graph below:

As you can see, there is a steady increase from left to right. At the end of the programme, people are aware that their initial skills were relatively low (first dot on the left), and their rating shows a consistent improvement (the leap from left to right). The scores in the middle, which represent their perception at the beginning of the programme, are biased: people are often very confident in their abilities when they start, but after going through the learning process they realise that they knew much less than they thought they did. However, even when we compare the "optimistic" rating they give themselves at the beginning (centre) with the rating they select for the end (right), there is still a measurable improvement[1] between the inflated initial score and the more mindful final rating.
Comparing our ICS and Fellowship Programmes
According to the data, the magnitude of the improvement in the ICS programme was much larger than in the Fellowship. These are the percentage changes in each category, calculated from the two post-programme scores (the dot on the left and the dot on the right in the graph above)[2]:
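For clarity, a percentage change here is simply the gap between the two post-programme mean scores, expressed relative to the recalled starting score. A small sketch, using invented example means (the skill names match our focus areas, but the numbers are purely illustrative):

```python
# Hypothetical mean scores per skill:
# (post-programme "start" recall, post-programme end rating), on a 1-10 scale.
scores = {
    "record keeping": (3.1, 6.8),
    "strategy": (3.4, 6.9),
    "finance": (3.0, 6.2),
}

# Percentage change = (end - start) / start, expressed as a percentage.
for skill, (start, end) in scores.items():
    pct_change = (end - start) / start * 100
    print(f"{skill}: {pct_change:+.0f}%")
```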

As in the initial pilot, the areas we focus on most explicitly are also those in which entrepreneurs report the biggest improvements: record keeping, strategy and finance. It seems that this technique is finally picking up what we are trying to measure!
What we have learned
In summary, we have additional support for the validity of our methodology. Furthermore, we have found that our ICS programme results in much larger reported benefits for the entrepreneurs. The two main differences from our Fellowship programme are the use of local volunteers and the duration of 12 weeks instead of 6. It seems that one or both of these factors is important for creating greater impact. We will continue using this methodology in future programmes and compare the results with the needs of entrepreneurs to ensure that our curriculum is tailored to the people we aim to empower.
[1] The improvements in each field are all statistically significant at the 1% significance level.
[2] Some categories changed, so we are not able to report all comparisons.