Whose Voices Count?

Ensuring all views are considered in participatory evaluation

The World Bank Group wants to improve its development effectiveness by, among other things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.

Participation in development is not a new concept. It goes back some 40 years, to when practitioners recognized the importance of participation and its link to ownership and development impact. In today’s world, citizens voice their concerns: protests in the streets from Istanbul to Cairo and Rio are just one way of doing so. More importantly, new and affordable technology vastly expands the ways citizens can raise issues and ask to be heard. Development agencies can use these same technologies to reach citizens – including the poor – and give them a voice in project planning, monitoring, and reporting.

Participatory Evaluation: We have come a long way

I’m a great fan of the Most Significant Change method that Rick Davies started promoting years ago. Although the method’s authors suggested embedding it in projects from design through evaluation, I also used it successfully in an evaluation over 10 years ago in Papua New Guinea. We asked people in project-influence areas what they saw as the big changes in their communities over time and whether they felt these were good or bad, regardless of what the projects had aimed to do. Their assessment of what success looked like was complemented by more traditional evaluations, which together gave us a much deeper understanding of what was achieved or not, and why.

As we know from Robert Chambers’ work, and that of others, participatory work requires being mindful of whose voices are being heard and are influencing the process. The way we engage people in the field needs to be adapted to what makes sense to them – for instance, in Papua New Guinea, we worked with our local researchers to test and agree on participatory methods they felt would work best in the local context – and must ensure all voices are heard. We as evaluators need to be aware of the risk that one group or another will dominate the discussion – whether because of wealth and status, gender, ethnicity, age group, or sexual orientation – and design methods to ensure broad participation and differentiated collection of information and feedback.

Community Driven Development: Ideal for participatory evaluation

Today, at IEG we are conducting participatory evaluation work in a number of projects. Community-driven development projects are a good case in point: they are based on beneficiary participation from design through implementation, which makes them good candidates for citizen-centered assessment techniques in evaluation.

In our evaluation, we wanted to:

  • understand how the beneficiaries defined and characterized their own development processes at the individual, household and village level;
  • get a sense of how people in the communities defined the meaning of “empowerment”, and the livelihood impacts that were most important to them;
  • capture benefits that are less tangible and often lost in surveys or techniques that rely on quantitative methods, such as how access to finance affected women’s confidence in using money; and
  • ensure that the learning cycle begins and ends with those who were intended to benefit, as well as those who have been left out.

Employing technology to evaluate service delivery

We have used a range of evaluation technology to reach citizens and community members to gather their feedback on service provision.

In Afghanistan, we hired a company to run a local radio campaign that invited people to send an SMS if they wanted to participate in a phone survey. The company called people and asked them for feedback on health and education services. The advantages were clear: we reached areas we could never have traveled to and gathered information from people directly affected by projects and the services they were to enhance. The downside is that we could reach only those who had mobile phones and were willing to participate.

In Senegal, we used an innovative smartphone-based platform to gather beneficiary feedback on the utility and maintenance of sanitation equipment. Working with researchers at the University of Dakar, we carried out a large-scale, randomized, and low-cost survey that allowed us to report with confidence that, at present, 80 percent of the sanitation equipment constructed is still functioning and is considered useful by the households that use it. We also learned that sanitation facilities were more likely to be maintained in households with an able-bodied female member present (wife, daughter, or sister), since these women were typically charged with cleaning them.
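For readers curious about the mechanics behind reporting a figure like “80 percent still functioning,” here is a minimal sketch of how a yes/no survey question can be summarized with a point estimate and a confidence interval. The data and function names are hypothetical illustrations, not the actual Senegal platform or dataset.

    # Minimal sketch (hypothetical data): summarizing a yes/no survey item,
    # e.g. "Is the sanitation facility still functioning?", with a point
    # estimate and a 95% Wilson score confidence interval.
    import math

    def wilson_interval(successes, n, z=1.96):
        """Return the 95% Wilson score interval for a binomial proportion."""
        if n == 0:
            return (0.0, 0.0)
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
        return (center - margin, center + margin)

    # Hypothetical responses: 1 = facility functioning, 0 = not functioning
    responses = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
    functioning = sum(responses)
    n = len(responses)
    low, high = wilson_interval(functioning, n)
    print(f"Functioning: {functioning}/{n} = {functioning/n:.0%} "
          f"(95% CI: {low:.0%}-{high:.0%})")

With a genuinely large, randomized sample the interval narrows, which is what allows an evaluator to quote a single headline percentage with confidence.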

Citizen engagement across the project cycle

These examples demonstrate the value of citizen engagement for evaluation and some of the challenges that come with it. The Bank Group’s commitment to engage citizens early will certainly benefit evaluation as well, as we can build on their views and reconfirm how things have worked out at the end.

This blog post first appeared at http://ieg.worldbank.org/blog/whose-voices-count