In our daily work routines, most of us get so occupied with jumping from one task to another, meeting deadlines and producing deliverables, that we rarely get a chance to step back and think about what we are doing and how we could do it better; somewhere along the way, the purpose begins to get lost. Given this reality, the Evaluation Conclave provided not only a much-needed opportunity to reflect on our evaluation experiences but, with all its informed and stimulating sessions, also a conducive structure in which to do so.
Katherine Hay’s much-appreciated piece during the opening plenary highlighted the need for strategic, short-term experimentation in development evaluation, and the need for evaluation in the public sector.
The first session, ‘Leveraging data and technology to enhance implementation and uptake of evaluations’ by AidData, spoke about a few projects (www.openaidmap.org) that provide free online access to standardised, double-blind, geo-coded data that can help contextualise and correlate evaluation findings with data from other sources on other indicators, such as the geography, economy and political situation of a region. These projects also organise capacity-building workshops to facilitate the use of these websites by different stakeholders, with the particular aim of strengthening the place of local governments in the development debate.
The workshop on ‘Getting Published’ by Professor John Bradley Cousins from the University of Ottawa was of particular relevance to novice evaluators. Prof. Cousins covered the process of publishing in a peer-reviewed evaluation journal, planning for publication, important considerations, framing the paper, criteria, practical tips, the different types of articles, other formats for publication, and an extensive list of evaluation journals from across the globe (most of which are listed on the website www.feministevaluation.org). An evident yet important point he made was that evaluation journals do not accept evaluation findings themselves as evaluation writing for publication.
Patricia Rogers’ design clinic, ‘Planning, managing and reporting evaluations’, shared hands-on experience by taking up the case of an RCT evaluation by Breakthrough. She also spoke about her website, www.betterevaluation.org, an international collaboration that shares options for frameworks and methods to improve evaluation.
The session put together under the project ‘Engendering Policy through Evaluation’ discussed the ‘Value added by feminist evaluation’ through two evaluations: one of a workplace women’s empowerment programme, and another of a gender-blind voice-message programme on mobile phones designed to provide information on agriculture and livelihoods to farmers in rural India. Through these two cases, the session explicated how feminist evaluation operates within the limitations of the programme and a predefined evaluation methodology. The moderator, Shraddha Chigateri, also underscored the need for evaluations to look for progress markers rather than transformative change.
Ganpati Ojha and Lamichhane’s session on ‘Appreciative Inquiry’ was particularly refreshing. Appreciative Inquiry is a user-focussed approach that aims to bring different stakeholders together to make meaning of a programme and its evaluation collectively, and to look for opportunities for programme improvement rather than for problems.
I also attended Simon Hearn’s introduction to Outcome Mapping. What I found particularly interesting was how outcome mapping grades change into change we would love to see, like to see and expect to see. Keeping this spectrum at the back of our minds while designing evaluations will bring them closer to reality, and make them more comprehensive, accurate and therefore useful.
Robert Chambers began his workshop on participatory evaluation by talking about how participatory evaluation is a function of attitude, behaviour and power relations. He showed, through different tools such as matrix scoring, how participatory methods can also be used to gather quantitative data. Beyond going into the details of using particular tools of participatory research and evaluation, he facilitated a very engaging discussion on the questions a participatory evaluator should ask of oneself, as well as the challenges involved in participatory evaluation. Some of the most striking questions included:
- Whose convenience?
- Whose language?
- Whose indicators?
- Whose priorities?
- Whose values, norms and beliefs?
The challenges included reaching the worst off, and convincing donors to give evaluators the time to spend in the field and to evolve methods.
I also attended the panel on systematic reviews by 3ie. A systematic review is a review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies included in the review; the studies themselves are the unit of analysis. Systematic reviews can be very useful for identifying the scope for more rigorous research on a given question, as well as for identifying trends and common elements of effective programming. Hugh Waddington’s presentation on the systematic review ‘Water, sanitation and hygiene interventions to combat childhood diarrhoea in developing countries’ was particularly interesting, since it demonstrated why and how one could relax particular predefined criteria for selecting studies and bring in more studies for a richer review.
Two sessions I attended from CLEAR South Asia’s workshop ‘Impact Evaluation: Theory and Practice’ were on randomisation and biases. The session on randomisation by Anant Sudarshan urged participants to look at randomisation as a spectrum, not solely in binaries of RCTs and non-RCTs. Diva Dhar’s session discussed biases in evaluation. The debate over the RCT approach also ran through several discussions at the Conclave. Given that everyone is inclined to further the methodologies that match their own skills, expertise and interests, the relevance of the debate felt questionable; on the other hand, it is extremely important in pushing the RCT approach, as well as other approaches, to improve and strengthen their application.
In addition to the learning and reflection the Conclave made possible, it enabled one to observe and learn from different presenters’ styles, how they engaged with participants, and the impact this had. What I also really enjoyed were the tea-break conversations with fellow and senior evaluators. The most exciting thing about evaluation, and this Conclave in particular, is that it brings together people from different disciplines who are curious about and eager to learn from each other’s approaches and methods.
Comment: What a fantastic write-up, Shubh. I learned so much, as I did not attend most of the sessions you did. Thank you very much!