At the recent Australasian Evaluation Society's (AES) conference I found myself thinking more and more about the focus on the E in M&E. That is, so little is said about monitoring. Good participatory monitoring can mean the difference between a successful project and a disappointment.
In the final plenary Professor Patricia Rogers asked what we, the delegates, considered a bad evaluation. As I travelled home (4.5 hours on the train) I thought a lot about this. I think a bad evaluation is one conducted when it is too late to do anything about what it finds.
There were the usual sessions on gender analysis and gender equality which, as presentations, were very good. But we don't seem to have come very far in terms of achieving anything like equality, even in evaluation. After last year's (2014) AES conference I wrote a blog (it's probably still in the system somewhere) on the Better Evaluation website (which was cross-posted here) on achieving gender equality in evaluation outputs and outcomes (findings, conclusions and recommendations) by ensuring gender equity, firstly in whatever it is that is being evaluated and then in the evaluation itself.
During one session on gender at this year's AES conference the presenter was asked about monitoring. The response was that there was a monitoring framework for the project. My comment was that it was probably made up of indicators and targets of which the so-called project beneficiaries were completely unaware. It made me even more convinced that what we monitor should be indicators determined by those to whom they relate. That is, how would, say, a woman know that she was more empowered?
But back to participatory monitoring and gender equity.
Most community development practitioners speak the rhetoric of 'grassroots development' and 'evaluation begins at the beginning'. While too often this remains just rhetoric, it does not have to. I use three very participatory tools for monitoring and evaluation - Appreciative Inquiry (AI) informed discussion, the Ten Seed Technique (TST) and the Pocket Chart (PC). I use these with both groups and individuals. Here I will talk about just the AI informed discussion and the TST.
Firstly, I say AI informed discussion because I use the principles of the 4Ds of AI to phrase indicative questions that initiate a discussion. The 4Ds as I use them are: Discover - e.g. what benefit have you gained? what have you learned? etc.; Dream - e.g. what do you hope to achieve (in, say, the next X years)?; Design - e.g. how will the benefits that you have identified help you achieve your dream?; Do - e.g. so what are you actually doing (about it)?
The discussion generated is then directed and controlled by the person or group responding to the initial indicative question. The Discover helps people clarify their perception of what they are involved in and its relevance to them; the Dream helps people determine objectives and goals; the Design helps people set out indicators and strategies; the Do is a bit of a reality check and an indication of the level of commitment and likely sustainability.
The TST (if you are not familiar with it, type Ten Seed Technique into a search engine and you will find a very good, easy-to-read PDF booklet) is used to allow people to give some degree of quantification to the responses, thoughts and emotions expressed during the AI informed discussion. You don't have to use seeds either; I use any ten similar objects found locally, e.g. ten stones or shells. This quantification should reflect the discussion from the AI and the discussion generated during the TST exercise itself. The result (which is a consensus result for group work) acts as a reference point (or baseline) for the next monitoring event, so that when these exercises are repeated periodically people can see whether the change they wished for or expected is in fact happening. Because of the discussion, individually or collectively, causal attribution of change (or lack of it) can be made by those participating in the monitoring. After all, monitoring is not just about finding out how things are going; it is also about why they are going as they are and what will be done as a result.
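For readers who like to see the arithmetic behind it, the counting step of the TST can be sketched in a few lines of code. This is only an illustration of the baseline-and-comparison idea, with invented categories and seed counts, not part of the technique itself:

```python
# Illustrative sketch of the counting step in the Ten Seed Technique (TST).
# A group distributes exactly ten seeds (or stones, or shells) across the
# options under discussion. The categories and counts below are invented
# for this example, not real monitoring data.

def tst_proportions(seed_counts):
    """Convert seed counts (which must sum to ten) into proportions."""
    assert sum(seed_counts.values()) == 10, "TST always uses exactly ten seeds"
    return {option: count / 10 for option, count in seed_counts.items()}

# Baseline monitoring event: how a (hypothetical) group rated, say, their
# confidence in speaking at community meetings.
baseline = {"not confident": 6, "somewhat confident": 3, "very confident": 1}

# A later monitoring event, repeating the same exercise.
follow_up = {"not confident": 3, "somewhat confident": 4, "very confident": 3}

# The change between events is the reference point for discussion:
# is the change the group wished for actually happening, and why?
change = {option: follow_up[option] - baseline[option] for option in baseline}
print(change)  # {'not confident': -3, 'somewhat confident': 1, 'very confident': 2}
```

Of course, in practice the seeds are moved and counted by the participants themselves; the point is simply that ten objects give an easy, visible proportion that can be compared from one monitoring event to the next.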
Now, how does all this enhance gender equity you may be thinking!
Whether individuals actively participate or not, as long as they are present and able to see and hear what goes on during the AI and the TST activities, they will have a good understanding of the state, perceived or otherwise, of the project at that time. Their knowledge will be the same as that of the majority of participants. Even though these data collection methods are very user-friendly and culturally and ethically appropriate, some people are still either unable or unwilling to actively participate - but as long as they are present they can't help but passively participate. The process is very transparent and results in a sharing of knowledge rather than an extraction of knowledge.
Because of the level of participation - which sometimes requires segregated groups and ensuring the timing of activities is appropriate and convenient - more people consequently have something to offer to an evaluation. There is greater equity in the knowledge base. This can then lead to greater levels of equality - gender, status, age, ethnicity - in evaluation outputs and outcomes.
I would also contend that this type of participatory monitoring enhances the sustainability of gains made in whatever the change process the project/program is involved in.