Paris meet up

EvalGender+ members met up on 17 October 2019 during the F3E event. Photo courtesy of Elsa.

Looking for experiences: Interdependence of Monitoring, Learning and Evaluation

Dear Members,

Monitoring and evaluation (M&E) processes can be among the most effective ways to foster learning. Unfortunately, because M&E is most often used for accountability, learning has not been established as a primary focus of M&E systems.

I have to participate in a panel discussion on the interdependence of Monitoring, Learning and Evaluation, and I need your help. If you have relevant experiences and inputs, please would you share them? I will acknowledge your contribution in my presentation and share back what I learn. Please share by 14th April.

Looking forward to learning from your experience.

Thanks

Rituu


Replies to This Discussion

Minal, do you see a link between monitoring, evaluation and learning? If yes, what is your experience? I would like to share members' experiences as well. Thank you.

Dear Rituu,

The most important learnings that I have from doing M&E on a regular basis are:

1) Data helps to strengthen the system.

2) Findings over a period tell us the nature, process and pattern of a behaviour.

3) Sometimes we can even arrive at a decision, such as the need to change the design of the program.

4) We learn what worked well and how to replicate it elsewhere.

Thanks

Regards

Minal

Thanks Minal. Who uses this data? Is it used for learning? If yes, what is the process for that? What is your experience with the above? Warm greetings!

Data is used for decision making by government and donors alike. For instance, data from monthly reporting suggested that the data person needed training to ensure quality, so hands-on training was held during a field visit. After a WHO evaluation of adolescent health, the donor decided that the intervention design needed to be changed, and so on.

Minal

Dear Rituu,

First of all, best of luck with your current background paper preparation. In response to your original question, I would like to share a few thoughts: 

In past evaluation training programs that I gave, when asked about the difference between monitoring and evaluation, I tended to provide a clear-cut response. I used to define monitoring as an ongoing activity consisting of periodic data collection (e.g., monthly or quarterly), aimed at "checking the pulse" of a program and specifically intended to verify the level of compliance between the results attained on the ground and the originally envisaged results (see your logical framework). After that, I provided a definition of evaluation and, in a Socratic manner, encouraged my training participants and colleagues to think about the root of the word evaluation (which is intrinsically linked to a "value" dimension). In doing so, I stressed that evaluation was mostly the determination of the worth, merit and social significance of a program, policy or project (Scriven's definition), emphasizing the value judgment and not merely the compliance dimension (as in the case of monitoring). That's how I used to see things in the past.

Over time, though, my appreciation of M and E changed. As I mentioned in several of my posts introducing the concept of "the RBM-ization of the evaluation function", I have observed that the distinction in practice between the two (the big "M" and the big "E") has become finer and finer. The result has been a tighter interdependence between Monitoring and Evaluation for Learning purposes.

What I have witnessed in my own practice in particular is an increased tendency to include "evaluative questions" not only in monitoring but also in auditing. That's why I have been thinking about a relatively new concept that I myself would have considered unthinkable until a few years ago. I am talking about the concept of "evaluative monitoring", that is, the inclusion of questions of value in regular monitoring, often under time and budget pressures (no time or money for a full-fledged evaluation, combined with an explicit request from the same managers/implementers for information on the quality and worth of programming aspects). Obviously, the principle of independence could be very much questioned here but, if this kind of "mixing" is done for learning purposes rather than for vertical accountability purposes (that is, to meet funders' requests for specific information), one could have more confidence in the good faith of the whole exercise.

So, in my opinion, interdependence is growing, often due to time and budget constraints and, to a certain extent, to an increased ability to intentionally "mix" and "play around with" concepts (e.g., M&E) that for a long time were the exclusive domain of academic experts and external consultants, but that these days seem to be more and more fully owned by a number of "enlightened" development professionals around the world.

I look forward to both learning more about other members' experience and reading your paper/presentation.

Michele  

Dear Michele,

Thanks for sharing your experience in detail. Could you please share an example? It would be valuable.

Grateful Rituu

Hi Rituu,

Sure. Examples are numerous. One that I remember more vividly than others is an education program implemented in Ghana, in which program implementers used what they called a "qualitative auditing" tool (once again, you will see here that the line between monitoring, auditing and evaluation gets thinner and thinner at times). In this specific project, implementers went beyond simply checking that schools had been built according to the safety standards agreed upon at the national and international levels (classic auditing), beyond checking that the envisaged number of teachers had been hired and went to teach at the targeted schools daily as required by the program, and beyond checking students' attendance and the quantity of school supplies distributed (the "business as usual" monitoring of an educational program). Instead, they did the following: with the help of some community volunteers, they started interviewing children about the "quality" of their teachers and "how interesting and relevant to them" the things they were learning in school were, not once every two years but on a quarterly basis. Students were also periodically asked how they perceived the quality of the food distributed during breaks, and were invited to reflect upon ways to improve both the teaching and the food, based on their respective levels of appreciation. Implementers also regularly asked their local respondents (not only students but also students' households and other key informants within the community) what they thought were some good and bad results produced by the program (again, this was not an external mid-term evaluation). This introduced the whole notion of unintended consequences, which is traditionally dismissed in classic monitoring.

All these questions about the quality and value not only of the services provided but also of the short- and medium-term results, as experienced by respondents while implementation was underway, introduced an unprecedented "evaluative" dimension into what would otherwise have been regular "compliance-driven" monitoring data collection. Program implementers were able to aggregate all responses periodically and, by comparing them with existing evaluation criteria and synthesizing them, made the best possible effort to improve the program.

I hope this example helped illustrate my point. I am confident that other members of our community could provide examples of something similar to what I described as "evaluative monitoring" in my prior post. During my recent keynote presentation at the ReLAC conference in Peru, I also talked about "agile and dynamic" monitoring, a concept that Michael (Bamberger) and I have been discussing during our various workshops on "unintended outcomes of development programs".

I hope this helps,

Michele 

Thanks Michele for the example. I loved it! It reaffirmed my own experience of continuously engaging those who have a stake in the project to reflect on what is being done, to self-assess, and to apply this learning to modify the response. This is what I call participatory action research. I have done it a few times; the longest was an IDRC-funded project with sex workers in Cambodia and India, which had multiple cycles of action and reflection. I used the community life competence approach and its self-assessment framework.

I am fortunate to learn from you. Merci!

Dear Rituu and Colleagues,

This is a fascinating conversation. Thank you for this opportunity.

I would like to share my experience as an independent evaluator in a three-year employability empowerment program for disadvantaged young women in Recife, Brazil. Due to a series of unfortunate dynamics between stakeholders, a distrust of evaluation was established early on in the project. Preconceived notions of punitive evaluation, skepticism about the validity of evaluation data, and the drain on limited financial resources all contributed to a lack of interest in evaluation procedures and evaluation results among the program implementation staff. It took more than a year to gradually break down these barriers, build trust, and have insights from the evaluation used by staff members in their decision making. Key factors in this transition included the usefulness of evaluation results in fulfilling reporting requirements (implementation staff rarely had time for documentation); the use of objective on-site observations and girls' feedback in staff training and re-training; and the systematization of program successes and learnings. Plus a very large dose of positive interpersonal relations.

I wonder how other researchers have dealt with this interface.

Thank you,

Ann

Hi Ann,

Thanks for sharing your experience. I can relate to it, as I have been in an organisation that was evaluated by external evaluators, where some of us had in-depth interviews (IDIs) that served only to extract data. Some key staff were not even interviewed. We did not agree with the findings, though I must say the report was a fancy one. The findings were never implemented.

What has worked in my experience? Participatory evaluation using a strength-based approach, which reiterates the value of each stakeholder and helps bring in the voices of the most marginalised. See what we did recently:

Nandi, Rajib; Nanda, Rituu B.; Jugran, Tanisha. 'Evaluation from inside out: The experience of using local knowledge and practices to evaluate a program for adolescent girls in India through the lens of gender and equity' [online]. Evaluation Journal of Australasia, Vol. 15, No. 1, Mar 2015: 38-47. Peer-reviewed journal article. ISSN: 1035-719X. Available at: http://search.informit.com.au/documentSummary;dn=936838345059984;res=IELBUS [cited 31 Mar 15].

This really turns things on its head. Very interesting perspective.

Thank you for sharing.

Ann

Hi Rituu,

I see a growing tendency toward MEL instead of M&E, and I think this is a positive trend, as it puts emphasis on the feedback loop and on applying the results of the analysis of monitoring data and of evaluation to improve interventions. However, in resource-poor organizations it is hard to get to the E, let alone the L.

Having worked in a few organizations that span different programmatic sectors, I find that the L part of MEL is often contained within specific programmatic areas, sectors and programs. There, the process is M...L. The "debrief" method is often used for this purpose. However, when the debrief takes place, even monitoring data is often lacking, and the debrief focuses on the process of delivering outputs. Program implementers interact with individual participants, and their perceptions are colored by that interaction.

In order to have a really effective MEL system, monitoring data and formative evaluations should be part of the process. Furthermore, learning should cross sectors and programs, especially where gender-sensitive and women's empowerment approaches are concerned. For that to happen, program managers and policy makers have to be willing to discuss the "good" and the "bad" with others outside their silos, with whom they may even be competing for resources. This is not always easy to achieve.

Have you heard of the management method called Agile? A sort of antidote to the typical "project management" process, it is frequently used in software development, where it is understood that neither the developer nor the participants/customers can know in advance what is "needed." There must be a process of iterative development, and customer feedback is not something that comes at the end but an integral part of the development process. For the resulting product or service to succeed, continual learning and adaptation are needed, and the ability to continually adapt will help that product or service keep meeting customers' needs. Creating systems to continually engage participants in this learning process is not only useful for learning together; it is empowering. Nevertheless, an understanding donor is needed to carry out such an approach, since the traditional logframe approach still seems to be the most common.

best wishes,

Meg


© 2019 Created by Rituu B Nanda.
