Monthly Corner

Francois Iradukunda et al., M&E Tool - User Guide

Laura Gagliardone - [EEAP Webinar 13] Summary Notes and Recording - AI and Evaluation of Energy Programs and Policies

DN News Liberia Article, by Sir-George S Tengbeh

NIITI Consulting - Blog

Independent Evaluation ADB - Publication

Alok Srivastava - Blog

Feminist Policy Collective 

The India Gender Report – the first of its kind – is conceived in the context of the many gendered rights enshrined in the Constitution of India. The endeavour is to examine myriad essential aspects of gendered economic, extra-economic and non-economic status, perceived through the prism of transformative feminist finance, in order to demystify the role of the Macro-Patriarchal State as simultaneously enabler and de-enabler. Each of the 26 chapters, which interlink academics, analysis, advocacy and action, indicates four universal processes across all sectors and sub-sectors: the reinforcement of gender de-equalisation; the intensification of patriarchal rigidities; the deepening of economic and extra-economic divides; and the increased exclusion of vulnerable and marginalised groups.
Lead Anchor: Ritu Dewan with Swati Raju

Your 'BEST' experience of engaging communities & stakeholders in Evaluation?

Dear all,

The 'Participatory Evaluation: Are We Listening?' task force is working to gain a deeper understanding of participatory evaluations. Together we have come up with a query we would like to pose to the Gender and Evaluation community.

Query: Your BEST experience of participatory evaluation in the context of equity-focused and gender-responsive evaluations

Think about your BEST experience of participatory evaluation in the context of equity-focused and gender-responsive evaluations. Based on this, we invite your inputs. Please follow the format provided below.

1. Your name, gender and country
2. How would you define 'participation' in evaluation? What does it mean for you?
3. Why was it your BEST experience in implementing participatory evaluation?
4. Who commissioned this evaluation? (government, NGO, funding agency, etc.)
5. Whom did you engage in the evaluation? Why did you engage them? At what stage of the evaluation?
6. What processes or tools did you use in the participatory evaluation?
7. What worked and what did not work, and why?
8. How did you address the challenges?
9. What did you learn? How would you do it differently next time (approach, methodology, relation with stakeholders, etc.)?


Why this query: We are members of the EvalGender+ task force on 'Participatory Evaluation: Are We Listening?'. The principle of 'no one left behind' is at the core of the SDGs. Keeping this in mind, we would like to go deeper into the term "participation": what helps and what hinders actual participation in evaluation, particularly among the most marginalised communities.

Action points for the e-discussion:
1. We will analyze and prepare a summary of responses and share it widely.
2. Our task force members will pilot what is learned through this discussion, especially from a gender and equity lens. This will help us determine the future course of action to enhance participation, especially of the most marginalised.

We request your responses by 30 October 2016.

Best regards,
The 'Participatory Evaluation: Are We Listening?' task force, EvalGender+

Replies to This Discussion

1. How would you define 'participation' in evaluation? What does it mean for you?

My best experience of participatory evaluation was one that was not only participatory but entirely led by a group of targeted beneficiaries. Instead of simply inviting some beneficiaries (I know we all hate this term, and still we all use it!) to participate and provide input into the terms of reference or some of the questions to be asked, in this case (actually three cases: http://goo.gl/2oC42P, http://goo.gl/SrKA49, http://goo.gl/0HQ1Fq) a group of beneficiaries was enabled to evaluate the entire project.

The children selected to be evaluators were 50% girls and 50% boys. Participation was of course voluntary. The selection criteria only required that they attend project schools and that they have literacy levels on par with those of children in their grade (not necessarily their age, i.e. allowing for repeaters). These criteria were specifically set out to ensure 'regular' children were selected, and staff were guided not to seek out the best-performing or most academically talented children. We also made specific attempts to include children with disabilities, and in two out of three cases that was possible. We took care that children did not miss any days at school to take part.

They were asked to come up with their own definition of success based on what the project was meant to achieve, to assess the relevance and focus of what the project had done, with whom and how, and to draw their own conclusions, with no interference in their judgements. These beneficiaries collected their own data to answer the questions they had formulated themselves, and analysed the data accordingly. As the project in question had a gender focus (access to quality education, particularly for adolescent girls), the evaluators had to bear gender differences in mind as they formulated questions and collected and analysed the data. Perhaps I should add that this was made even more remarkable by the fact that the evaluators were all children, some in primary school, the youngest of whom was just eleven.

I should also clarify that this is not my definition of participation, but it was as close as I could get at the time to taking 'participation to the extreme' by fully handing over power to the beneficiaries.

2. Why was it your BEST experience in implementing participatory evaluation?

It was my best experience because of the 'extreme' nature of the participation and the full assurance that any judgements arrived at were truly grounded both in evidence and in the views of those intended to benefit from the intervention. Moreover, the process was highly empowering and motivating for the children who led the evaluation, and often for respondents as well, including their teachers, other adults in the community and people in positions of responsibility (e.g. the Ministry of Education, traditional leaders, etc.). This was an unintended by-product of the process, but it made the experience far more enjoyable and, in my opinion, added to its value. Even with participatory processes, we frequently find that M&E activities are extractive in nature. It was extremely rewarding to be part of an evaluation that gave as much as it took.

3. Who was this evaluation conducted for? (government, NGO, funding agency, etc.)

The evaluation was conducted for an NGO with funding from DFID.

The purpose of this exercise was twofold: firstly, to understand the difference made by the project, particularly from the perspectives of those directly targeted by it. For this multi-sectoral project, which by design sought complementarities between different strands of work (governance, SRHR, education, GBV, etc.), we wanted to understand how, or if, the complementarities manifested themselves and contributed to the ultimate goal. Secondly, we wished to see whether it was possible for children to evaluate a large and complex project and produce a credible evaluation.

4. Whom did you engage in the evaluation? Why did you engage them? At what stage of the evaluation?

The team of young evaluators engaged with all the programme stakeholders, including community members, leaders, teachers, school management, members of VSLAs, and programme implementing staff from the lead agency and partners.

5. What processes or tools did you use in the evaluation?

The processes and tools are described in detail in the reports, which are publicly available. The processes can be categorised into:

1. Facilitation processes to familiarise the children with the programme, data collection and analysis practices, conducting an evaluation, etc.

2. Data collection tools suitable for young people or people with limited literacy.

3. Facilitation tools to help arrive at a judgement for each of the different criteria (the DAC criteria) and for the programme as a whole, including processes for analysing evidence from various perspectives (including gender, and differences between children's and adults' views and needs) and for generating consensus (although that was not always possible).

6. What worked and what did not work, and why?

Given the highly experimental nature of this work, it was rather surprising how well everything worked. The facilitation methodologies introduced (developed for this purpose and never tried before) worked well, as did the methodologies for data collection and analysis (again, mostly developed for this purpose). In fact, the lack of problems or challenges along the way was bewildering (and fantastic!). Perhaps the greatest learning would be precisely that it can be done!

7. How did you address the challenges?

The fact that everything did work incredibly well is probably more a reflection of our stereotypes and biases about children's abilities than of great planning or outstanding knowledge on my part. Perhaps having a strong relationship with the local staff, and an established track record of delivering high-quality work that made M&E useful and 'fun' for project staff, was a key element that helped overcome any potential opposition to trying something so audacious. I was in the fortunate position of being able to draw on my own reputation when telling the staff, 'I really don't know if this is going to work, but I'd like to try.' Every project we asked eagerly agreed, and those we could not accommodate in our selection because of time actually came back and 'appealed' our decision, as they were eager to be included.

However, the fact that funding came from an unrestricted source was probably also a factor, in the sense that it made failure an option. Whilst we had no obligation to do this (though we were strongly encouraged by the donor to be innovative), we were under an obligation to publish and share the report.

In other words, failure was an option but it would have had to be a public failure.

8. What did you learn? How would you do it differently next time (approach, methodology, tools, relation with stakeholders, communication, logistics, etc.)?

An unexpected by-product of this process was a very strong and evident sense of empowerment among the participants.

Finally, we learned that when data collectors are themselves part of the beneficiary group (irrespective of their age), they are able to gain access to different information from that obtained by enumerators. We also noticed a higher level of participation among respondents when they were interviewed by fellow beneficiaries.

Because of the innovative nature of this evaluation, we also observed greater interest in and engagement with the evaluation findings by project managers and stakeholders, although surprisingly not by the donor or higher-level management (neither a positive nor a negative reaction, just silence). Whilst the findings were mostly in line with expectations (solid monitoring systems were already able to detect weaknesses in the project), some interesting nuances and reflections on both the ToC and implementation did come to light through this process.

I was able to take advantage of pre-existing strong M&E systems that already generated a strong base of knowledge and data about the project. These systems combined qualitative and quantitative approaches and had become a platform for experimenting with more child-friendly methodologies for data collection. They had also helped build the confidence of programme staff and stakeholders in the use of qualitative data. This made the ground fertile for the next step. In a different setting, I would probably do things differently.

Ethical concerns ought to be at the forefront of our considerations if we intend to ask vulnerable groups to collect data or participate in monitoring and evaluation.

9. Your name, gender and country

Laura Hughston (F), UK resident. The evaluations described above were carried out in Cambodia, Zimbabwe and Kenya.

Thank you, Laura. How very interesting.

Do you think you were able to conduct this evaluation because of the communication and flexibility you had with the donor? Is this something that can be replicated in the future with other donors?

Hi Josephine, given that these were the very first experiences, I would say that the flexibility of the donor did play a part in giving management the confidence to allocate funding to something so experimental. Now that these three evaluations have been conducted and have been successful, there is no reason for such hesitation in the future.

These evaluations were relatively low-cost (approx. 15K USD), especially considering it was a multi-sectoral project. I would therefore suggest that it is perfectly feasible for large projects such as this one to conduct an evaluation completely led by beneficiaries, even when they are minors; and where the donor has requirements for large-scale quantitative data collection, it is possible to conduct both exercises in parallel.

Hi again,

I was told the links above have stopped working; apologies for that. Please find the links to the reports here:

Cambodia: http://bit.ly/2fuSLjA

Zimbabwe: http://bit.ly/2f9vvqI

Kenya: http://bit.ly/2frUCDd

Hello All,

Thank you for this discussion and for asking me to write in further detail on the queries posted here. Apologies for the delay, but thankfully I haven't missed the deadline yet.

Please find my responses below:

1. Your name, gender and country: Paramita Banerjee, Female, India

2. How would you define ‘participation’ in evaluation? What does it mean for you?

For me, 'participation' in the context of evaluation means listening to and heeding the voices of all concerned, and taking seriously their assessment of what has changed, what has worked and what hasn't. Such participation often needs to be multidimensional, but it can never be less than a bi-level exercise. One level is constituted by every segment and stratum of the beneficiary community for whom an intervention is implemented. Understanding the constituents of a beneficiary community is absolutely essential for an evaluation to be truly participatory, for no community is ever entirely homogeneous. Understanding the power dynamics within a particular beneficiary community, and among different communities if more than one is involved, is crucial to ensure that the least powerful and most marginalised voices are also heard and heeded. More often than not, for instance, the most marginalised in a community is the adolescent girl.

The other level is the staff of the implementing organisation; there, too, everyone needs to be heard and heeded, from the lowest-paid front-liner to the project director. Sometimes a third layer may also be involved, as when a big national-level NGO secures funding and implements a programme through smaller, localised NGOs, or when a project is implemented collaboratively by more than one organisation. The principle remains the same: people at all levels in each of the organisations must participate in the assessment.

An evaluation culled from the responses of these different layers of people at both ends is, in my understanding, a fair assessment. However, it can hardly be over-emphasised that the active involvement of all levels of people, on both the implementing organisation's side and the beneficiary community's (or communities') side, must happen freely, i.e. with consent; sometimes it is also necessary to gather responses from different layers of people separately from each other, to limit the effects of hierarchy on the free expression of their assessment.

3. Why was it your BEST experience in implementing participatory evaluation?

It is, indeed, difficult to select one as better than all the other participatory evaluations I've done, but, under compulsion, I've chosen this one because the organisation for which this end-line evaluation was carried out was also fully on board with my approach. That made it possible to actually ensure the participation of the least powerful and most marginalised at both the community level and the organisational level.

This is an important point, actually, for not all organisations are equally comfortable allowing the free and active participation of all levels of their staff or of their beneficiary community members. Barely concealed attempts at supervision or control defeat the purpose of listening to community voices. This organisation's willingness to allow me and my colleagues, as evaluators, to interact on our own with different levels of community members made this particular evaluation one of the most satisfying.

4. Who commissioned this evaluation? (government, NGO, funding agency, etc.)

It was conducted for an international human rights organisation of repute, for one of its 'safe and adequate housing rights' interventions for the urban poor in an East African country. Three different arms of the organisation were involved in implementation: the international secretariat, the East and Central Africa regional office, and the country office where the project was located. In the country of implementation, two other partner organisations were also involved.

5. Whom did you engage in the evaluation? Why did you engage them? At what stage of the evaluation?

I designed the methodology in such a way that the entire assessment came through the voices of those engaged, via personal in-depth interviews as well as focus group discussions.

Since the beneficiary communities involved were urban slum dwellers, they were heterogeneous in nature and many types and layers of marginalisation were involved. For example, with reference to tenurial rights, those with a shorter duration of stay were less powerful than those who had lived in these slums for more than a generation. So representatives from families with different periods of stay were included. The presence of teenage girls and women from every family was ensured. Gender defenders from every slum were included to assess the degree to which the concept of safety included the prevention of, and protection from, sexual and gender-based violence. Survivors of such violence were also included, to understand the impact from their perspective.

The details of community participation are provided below:

1. Focus group discussions were held with members of existing community groups involved in the struggle to ensure safe and adequate housing rights for the urban poor. It was ensured that members from all the slums covered by the project were present during these interactions. Members from other slums not directly covered by the project were also included, to understand the larger impact, if any. There were deliberate probe questions to understand the differences between spontaneously formed community groups that existed before the intervention and those created strategically after the intervention started. All of these FGDs were conducted without the presence of anyone from the implementing organisations, to ensure that community members could speak their minds openly without any possible discomfort.
2. Semi-structured in-depth interviews were then conducted with particular community representatives who had emerged as vocal, deeply involved and analytically able. Similar interviews were also conducted with survivors of gender-based violence to understand their perspective. These interviewees were selected by community members, not by the implementing organisations.
3. With teenage girls and boys, only FGDs were conducted, in each of the slums covered under the project, always ensuring: a) the presence of an equal number of girls and boys; and b) the presence of different age groups ranging from 14 to 18.
4. During all community-level interactions, special attention was paid to the presence of members with varying periods of stay, as well as representatives of different tribes/clans, since many of these informal settlements have issues of tribal/clan-based power dynamics and conflicts.
5. Beyond all of these formal interactions, there was deliberate participation in two celebratory events, to undertake ethnographic observation of the relationships and interactions between beneficiary community members and representatives of the implementing organisations in a non-formal setting.

At the organisational level, every staff member involved, from the lowest-paid community outreach worker up to the head of international projects, was included. Partner organisations were given the same importance as the main implementing organisation that had commissioned me. In-depth interviews with community outreach workers were conducted with special attention, since they also represented the beneficiary communities, though they were on the payroll of the implementing organisations.

It was a four-phase evaluation: the first phase of interactions was with the international secretariat; the second with team members of the regional and country-level offices of the main implementing organisation and the partner organisations. The third and most elaborate phase was carried out at the beneficiary community level, as per the details provided above. The final phase involved a round table with selected team members from the implementing organisations and selected community representatives, for open discussion of some of the issues the latter had brought up.

6. What processes or tools did you use in the participatory evaluation?

Apart from the scrutiny of relevant project documents as secondary research, primary research at the field and organisational levels involved in-depth interviews, focus group discussions, key informant interviews and ethnographic observation. Collecting Most Significant Change stories from every participant was also one of the important tools used. The details of how these tools were used are described in my response to the previous question.

7. What worked and what did not work, and why?

In slum communities where the implementing organisations did not have enough rapport with the different power groups, involving every segment was a challenge. This was addressed by holding separate meetings with the leaders of each of these groups and accessing members of those ethnic communities through them.

Since the implementing organisations' engagement with teenagers was limited, involving them in the assessment could not be carried out with equal efficacy in all the slum communities; this was noted as an area for improvement.

Apart from these two major challenges, the time allotted for this end-line evaluation proved to be a constraint, especially because of the complexities of the beneficiary communities already described: the need to engage members of different tribes/clans and members with varying tenurial rights. This had to be addressed by working overtime, 16 to 18 hours a day, which obviously becomes strenuous over a period of time.

8. How did you address the challenges?

Please refer to my response to the previous question, where I have described how the challenges were addressed.


9. What did you learn? How would you do it differently next time (approach, methodology, relation with stakeholders, etc.)?

I am happy with the methodology I developed, and so was the commissioning organisation, so I really would not want to change anything there. But I have learnt the need to incorporate some extra days: a) to accommodate sudden human problems erupting in beneficiary communities; and b) to allow community members to speak in their own free-flowing way, which might at times stray a little from the thematic focus; interruptions make it difficult for them to get back to what they wanted to say. Articulation is often significantly different from the concise way of academic speaking, and important realities emerge from yarns and anecdotes. It is therefore important to allow a narrative to flow out that way.

Thank you for replying, Paramita. Do you think key informant interviews and ethnographic observations are the best way to ensure an evaluation is participatory? They seem to be the tools most often used.

Response from Jim Rugh, United States, via email

The evaluation approach and methodology should be guided by the needs of the evaluation. For all community-based projects, communities should take the lead in evaluating the work that has been done. One of my experiences can be found in this paper:
Rugh, J. (1997). Can participatory evaluation meet the needs of all stakeholders? Evaluating the World Neighbors' West Africa program. Practicing Anthropology, 19(3), 14-19.

Thanks, Jim, for the response.
I wonder, though, with the current frameworks of development, including donor dynamics, how much we can really have communities lead evaluations. We all want evaluations to be community-led, but often the evaluations seem imposed upon them.

Thank you, Rituu, and the whole community. A word on each of your questions:

1. Your name, gender and country

Jindra Cekan/ova, Female, USA and Czech Republic


2. How would you define ‘participation’ in evaluation? What does it mean for you? 

It is the core of our work. The views of those whom we listen to, our stakeholders (partners and participants), must be at the center of evaluation, for how can international development serve them better unless we listen?


3. Why was it your BEST experience in implementing participatory evaluation?

So many... Recently, during a post-project sustainability evaluation (in Africa), one older district government official whom we had invited to the debrief (based on a core belief: leave learning local) said, "No one has ever returned after any evaluation. Could you get us money to do such evaluations ourselves?"


4. Who commissioned this evaluation? (government, NGO, funding agency, etc.)

An NGO.

5. Whom did you engage in the evaluation? Why did you engage them? At what stage of the evaluation?

Participants and partners, NGO staff, and donors. We are all legs of the same sustainability stool.


6. What processes or tools did you use in the participatory evaluation?

Qualitative methods (RRA: focus groups, key informant interviews), supplemented by a quantitative household survey (gender-disaggregated) and comparison groups.
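[Editor's note] To make the idea of gender-disaggregated analysis with comparison groups concrete, here is a minimal illustrative sketch in Python (pandas). All column names and figures are invented for the example; none of this comes from the evaluations discussed in this thread.

```python
# Hypothetical sketch: disaggregating a household-survey outcome by gender
# and comparing intervention vs. comparison groups. The data are made up.
import pandas as pd

# Toy survey records: one row per respondent.
df = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M", "F", "M"],
    "group":   ["intervention"] * 4 + ["comparison"] * 4,
    "outcome": [0.82, 0.74, 0.79, 0.71, 0.61, 0.64, 0.58, 0.66],
})

# Mean outcome and respondent count, disaggregated by group and gender.
print(df.groupby(["group", "gender"])["outcome"].agg(["mean", "count"]))

# Intervention-minus-comparison difference, computed separately per gender,
# so a gap affecting women is not masked by an average over everyone.
by_gender = df.pivot_table(index="gender", columns="group", values="outcome")
by_gender["difference"] = by_gender["intervention"] - by_gender["comparison"]
print(by_gender)
```

The point of the disaggregation is in the last step: computing the intervention-versus-comparison difference separately for each gender, rather than for the sample as a whole, is what keeps gendered differences in outcomes visible.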


7. What worked and what did not work, and why?

I've come to see that qualitative work answers the 'why', but as its participants are self-selected, we need further feedback from random quantitative surveys for the fuller picture.


8. How did you address the challenges?

Sequencing is an issue: do we do the qualitative work before the quantitative in sustainability evaluations (to shape the survey), or do we do it after, to explain the survey findings? Both approaches have great value; it depends on what we already know...


9. What did you learn? How would you do it differently next time (approach, methodology, relation with stakeholders, etc.)?

So much to learn about how to evaluate sustained and emerging impact via post-project evaluations! See more about SEIE at www.ValuingVoices.com.

Much more to learn about what the SDGs monitor, how to attribute results, and how to build national 'evaluative thinking' capacity... Thanks for asking!
