Evaluation of UN Women's work on the Care Economy in East and Southern Africa - Evaluation Report
A regional study of gender equality observatories in West and Central Africa, carried out by Claudy Vouhé for UN Women
Sources: UN Women
This regional study offers an inventory and analysis of the legal framework of gender observatories, together with their mandates, functions and missions. It is based on exchanges with 21 countries, in particular the eleven countries that have created observatories. It compares the internal organisation and budgets of the observatories across countries, examines operational practices, in particular the degree of involvement in the collection and use of data, and identifies obstacles and good practices in influencing public policies for gender equality. Finally, the study sets out strategic recommendations intended for observatories, supervisory bodies, and technical and financial partners.
MSSRF Publication - November 2025 - Shared by Rajalakshmi
Ritu Dewan - EPW editorial comment on Labour Codes
Eniola Adeyemi Articles on Medium Journal, 2025
An analysis of the "soft life" conversation as it emerges on social media, unpacking how aspirations for ease and rest intersect with broader socio-economic structures, gendered labour expectations, and notions of dignity and justice.
Tara Prasad Gnyawali Article - 2025
This article tells the story of communities living in a wildlife corridor that links India and Nepal, the Khata Corridor, which bridges Bardiya National Park in Nepal and Katarnia Wildlife Sanctuary in Uttar Pradesh, India.
It reveals how wildlife movement in the corridor affects community livelihoods, mobility, and social inclusion, with differential impacts on farming and marginalised communities.
Lesedi Senamele Matlala - Recent Article in Evaluation Journal, 2025
UN Women has announced an opportunity for experienced creatives to join its global mission to advance gender equality and women’s empowerment.
The organization is recruiting a Multimedia Producer (Retainer Consultant) to support communication and advocacy under the EmPower: Women for Climate-Resilient Societies Programme.
This home-based, part-time consultancy is ideal for a seasoned multimedia professional who can translate complex ideas into visually compelling storytelling aligned with UN Women’s values.
Application Deadline: 28 November 2025
Job ID: 30286
Contract Duration: 1 year (approximately 200 working days)
Consultancy Type: Individual, home-based
Hi
I am planning an impact assessment of a teaching tool used by teachers. Unfortunately, a baseline wasn't done and it's been over three years now. I was wondering if comparing to a control group of teachers who haven't used the tool would work. Are there any other methods that can be used to assess impact without bias?
Thanks
Priya Anand
Thanks, Priya, for sharing and thereby starting this very interesting discussion. I have learnt something from it. More importantly for me, I now understand from this discussion what colleagues who are professional evaluators meant when they argued that not every social scientist or professional who has evaluated development projects/programmes is an evaluator. This was said at the 2014 African Evaluation Association (AfrEA) Conference.
From the responses you have received, it sounds like you would be better off using mixed methods alongside the 'control' group of teachers you suggested. Focus group discussions might bring in other factors that could validate findings from the other methods. As someone pointed out in this discussion, some desk research, including project/programme or field reports, might also shed light on the methodology. Analysing the results against the original objectives and a SWOT(C) analysis might also help.
I hope it will be possible for you to share your experience after the impact assessment is done.
Cecilia
Thanks for a very informative discussion, but what about using the Most Significant Change approach to establish the effect of the teaching tool?
Reply by Priya Anand on October 9, 2015: Thanks, Esteban and Will, for recommending Michael Bamberger's book. And thanks to Isha, Sarah and Fanaye; your suggestions are very valuable.
Hello Priya,
Looks like you have enough to get started here, but I wanted to quickly add my voice, as this is a topic of interest to me. Maybe it's just my lens, but it seems I can still hear echoes of the great Randomista vs. Big Push Forwards debate (https://oxfamblogs.org/fp2p/so-what-do-i-take-away-from-the-great-e...). My favorite outcome of that whole era (now, thankfully, subsiding, with a few nasty pockets of resistance) was the DFID paper that outlines the full range of potential impact evaluation methods (http://r4d.dfid.gov.uk/Output/189575/), the counterfactual-based among them. Pages 16-23 give a great bird's-eye summary of the options out there.
Regards,
William
Thanks to Deo-Gracias Houndolo from the 3ie Delhi office for sending a response through email.
Dear Rituu,
Thanks for posting this question.
Technically speaking, one can evaluate an intervention using a with-without approach. However, findings from such an approach would be biased, and the change measured could not be attributed to the intervention being evaluated, simply because without a baseline reference it is not possible to effectively measure the magnitude of the effect induced by the teachers' intervention. Please note that baseline data in the treatment group is not enough to limit bias; you also need baseline data in your comparison group. Hence baselines are necessary to measure effects that are attributable to your intervention. Otherwise, just acknowledge that your results are an outcome evaluation rather than an impact evaluation.
More also needs to be said with respect to the identification strategy.
In any case, exploring secondary data could be a way to address the lack of baseline data in this case (see the illustrative sketch below this reply).
Best wishes,
Deo.
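To make this point concrete, here is a minimal simulated sketch in Python. All numbers and variable names are invented for illustration, not taken from the discussion: it simply shows how a post-only "with-without" comparison absorbs any pre-existing difference between tool users and non-users, while a difference-in-differences estimate that uses baseline data (or a proxy baseline reconstructed from secondary data) in both groups does not.

```python
# A minimal, hypothetical sketch of why a post-only "with-without" comparison
# can mislead, and how baseline data in BOTH groups (difference-in-differences)
# removes pre-existing gaps. All figures are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500                      # hypothetical number of teachers per group
true_effect = 5.0            # assumed true gain from the teaching tool

# Selection bias: teachers who adopted the tool already scored higher.
baseline_tool = rng.normal(60, 10, n)      # tool users, before
baseline_ctrl = rng.normal(50, 10, n)      # non-users, before

endline_tool = baseline_tool + true_effect + rng.normal(0, 5, n)
endline_ctrl = baseline_ctrl + rng.normal(0, 5, n)

# Post-only ("with-without") comparison: mixes the true effect with the
# pre-existing 10-point gap, so it overstates the impact.
post_only = endline_tool.mean() - endline_ctrl.mean()

# Difference-in-differences: change among users minus change among non-users.
did = (endline_tool.mean() - baseline_tool.mean()) - \
      (endline_ctrl.mean() - baseline_ctrl.mean())

print(f"Post-only estimate:    {post_only:.1f}  (biased upward)")
print(f"Diff-in-diff estimate: {did:.1f}  (close to the assumed true effect of {true_effect})")
```

The sketch assumes the usual parallel-trends condition; without a credible baseline or proxy for both groups, the honest framing, as Deo notes, is an outcome evaluation rather than an impact evaluation.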
I was wondering if it would help to measure a dose-dependent response, that is, change after different lengths of exposure to the tool (6 months, 12 months, 18 months, 2 years, etc.), in addition to the methods suggested.
Time-series/longitudinal analysis would require data collected over a period of time among the intervention group. There is no baseline for sure, but I am not sure whether any data has been collected during the 3 years of the intervention.
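As a rough illustration of this dose-response idea, here is a minimal Python sketch with entirely invented data: it groups teachers by how long they have used the tool and checks whether outcomes rise with exposure time. In practice this still assumes that longer-exposed teachers are otherwise comparable to those with shorter exposure.

```python
# A hypothetical dose-response sketch: if outcomes can be linked to how long
# each teacher has used the tool, a positive slope of outcomes on exposure
# time is suggestive (though not proof) of an effect. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
months = np.repeat([6, 12, 18, 24], 50)                        # assumed exposure groups
scores = 55 + 0.4 * months + rng.normal(0, 8, months.size)     # simulated outcomes

slope, intercept = np.polyfit(months, scores, 1)               # simple linear trend
for m in (6, 12, 18, 24):
    print(f"{m:>2} months of exposure: mean score {scores[months == m].mean():.1f}")
print(f"Estimated gain per additional month of exposure: {slope:.2f}")
```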
Reply by Elizabeth Negi on October 12, 2015: Dear Priya,
I think you have heard all the pros and cons of conducting the impact assessment. I don't think the use of a control group of teachers alone will reflect the changes anticipated through the use of the tool.
I do think participatory methods, especially FGDs as already suggested, will be the best option. Teachers/schools usually keep past records, and these may be a source of qualitative and quantitative data if objectives were set clearly at the start of the project.
Elizabeth Negi