Evaluation of UN Women's Work on the Care Economy in East and Southern Africa - Evaluation Report
A regional study of gender equality observatories in West and Central Africa, carried out by Claudy Vouhé for UN Women
Sources: UN Women
This regional study offers an inventory and analysis of the legal frameworks of gender observatories and of their mandates, functions and missions. It draws on exchanges with 21 countries, in particular the eleven that have created observatories. It compares the observatories' internal organisation and budgets across countries, examines operational practices, notably the degree of involvement in the collection and use of data, and identifies obstacles and good practices in influencing pro-gender-equality public policies. Finally, the study sets out strategic recommendations for the observatories, their supervisory bodies, and technical and financial partners.
MSSRF Publication - November 2025 - Shared by Rajalakshmi
Ritu Dewan - EPW editorial comment on Labour Codes
Eniola Adeyemi Articles on Medium Journal, 2025
An analysis of the “soft life” conversation as it emerges on social media, unpacking how aspirations for ease and rest intersect with broader socio-economic structures, gendered labour expectations, and notions of dignity and justice
Tara Prasad Gnyawali Article - 2025
This article tells the story of communities living in a wildlife corridor linking India and Nepal: the Khata Corridor, which connects Bardiya National Park in Nepal with Katarnia Wildlife Sanctuary in Uttar Pradesh, India.
It shows how wildlife movement through the corridor affects community livelihoods, mobility, and social inclusion, with differential impacts on farming and marginalised communities.
Lesedi Senamele Matlala - Recent Article in Evaluation Journal, 2025
UN Women has announced an opportunity for experienced creatives to join its global mission to advance gender equality and women’s empowerment.
The organization is recruiting a Multimedia Producer (Retainer Consultant) to support communication and advocacy under the EmPower: Women for Climate-Resilient Societies Programme.
This home-based, part-time consultancy is ideal for a seasoned multimedia professional who can translate complex ideas into visually compelling storytelling aligned with UN Women’s values.
Application Deadline: 28 November 2025
Job ID: 30286
Contract Duration: 1 year (approximately 200 working days)
Consultancy Type: Individual, home-based
this is a cross posting from http://hlanthorn.com/2015/11/26/thoughts-from-evalcon-on-evidence-u.... hope it is useful to folks who didn't make the think tank initiative's panel today in kathmandu.
i attended a great panel today, hosted by the think tank initiative and idrc and featuring representatives from three of tti’s cohort of think tanks. this is part of the broader global evaluation week (#evalcon) happening in kathmandu and focused on building bridges: use of evaluation for decision making and policy influence. the notes on evidence-uptake largely come from the session, while the notes on capacity building are my own musings inspired by the event.
one point early on was to contrast evidence-informed decision-making with opinion-informed decision-making. i’ve usually heard the contrast painted as evidence- versus faith-based decision-making, and think the opinion framing is useful. it also comes in handy for one of the key takeaways from the session, which is that maybe the point (and the feasible goal) isn’t to do away with opinion-based decision-making but rather to make sure that opinions are increasingly shaped by rigorous evaluative evidence. or, to be more bayesian about it, we want decision-makers to continuously update their priors about different issues, drawing on evidence.
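to make the bayesian framing concrete, here is a toy sketch in python. the numbers and the diagnosticity of each evaluation are invented purely for illustration; the `update` function is just bayes' rule applied to a decision-maker's belief that a program "works".

```python
def update(prior, p_evidence_if_works, p_evidence_if_not):
    """Return the posterior P(works | positive finding) via Bayes' rule."""
    numerator = p_evidence_if_works * prior
    denominator = numerator + p_evidence_if_not * (1 - prior)
    return numerator / denominator

belief = 0.3  # a skeptical prior
# three successive positive evaluation findings, each fairly
# (but not perfectly) diagnostic of the program working
for _ in range(3):
    belief = update(belief, p_evidence_if_works=0.8, p_evidence_if_not=0.3)

print(round(belief, 3))  # prints 0.89
```

the point of the sketch is the shape of the process, not the numbers: no single evaluation flips a skeptic to a believer, but a steady accumulation of credible evidence moves the prior a long way, which fits the "building blocks" rather than "big bang" picture below.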
this leads to a second point. in focusing on policy influence, we may become too focused on influencing very specific decision-makers for very specific decisions. this may lead us to lose sight of the broader goal of (re-)shaping the opinions of a wide variety of stakeholders and decision-makers, even if not linked to the immediate policy or program under evaluation. so, again, the frame of shaping opinions and aiming for decision-maker/power-center rather than policy-specific influence may lead to altered approaches, goals, and benchmarks.
a third point that echoed throughout the panel is that policy influence takes time. new ideas need time to sink in and percolate before opinions are re-shaped. secretary suman prasad sharma of nepal noted that from a decision-maker point of view, evaluations are better and more digestible when they aim to build bit by bit. participants invoked a building blocks metaphor several times and contrasted it with “big bang” results. a related and familiar point about the time and timing required for evaluation to change opinions and shape decisions is that planning for the next phase of the program cycle generally begins midway through current programming. if evaluation is to inform this next stage of planning, it requires the communication of interim results — or a more thoughtful shift of the program planning cycle relative to monitoring and evaluation funding cycles in general.
a general point that came up repeatedly was what constitutes a good versus a bad evaluation. this leads to a key capacity-building point: we need more “capacity-building” to help decision-makers recognize credible, rigorous evidence and to mediate between conflicting findings. way too often, in my view, capacity-building ends up being about how particular methods are carried out, rather than about the central task of identifying credible methodologies and weighting the findings accordingly (or about broader principles of causal inference). that is, capacity-building among decision-makers needs to (a) start from an understanding of how they currently assess credibility (on the radical premise that capacity-building exercises might generate capacity on both sides) and (b) help them become better consumers, not producers, of evidence.
a point that surfaced continuously about how decision-makers assess evidence was about objectivity and neutrality. ‘bad evaluations’ are biased and opinionated; ‘good evaluations’ are objective. there is probably a much larger conversation to be had about parsing objectivity from independence and engagement as well as further assessment of how decision-makers assess neutrality and how evaluators might establish and signal their objectivity. as a musing: a particular method doesn’t guarantee neutrality, which can also be violated in shaping the questions, selecting the site and sample, and so on.
other characteristics of ‘good evaluation’ that came out included not confusing being critical with being only negative; findings about what is working are also appreciated. ‘bad evaluation’ assigns blame and accountability to particular stakeholders without a nuanced view of the context and of events (internal and external) during the evaluation. ‘good evaluation’ involves setting evaluation objectives up front. ‘good evaluation’ also places the findings in the context of other evidence on the same topic; this literature/evidence-review work, especially when it does not confine itself to a single methodology or discipline (and, yes, i am particularly alluding to RCT authors who tend to cite only other RCTs, at the expense of sectoral evidence and simply other methodologies), is very helpful to a decision-making audience, as is help in making sense of conflicting findings.
a final set of issues related to timing and transaction costs. a clear refrain throughout the panel was the importance of the timing of sharing findings. this means paying attention to the budget-making cycle and sharing results at just the right moment. it means spotting windows of receptivity to evidence on particular topics, reframing the evidence accordingly, and sharing it with decision-makers and the media. it probably means learning a lot more from effective lobbyists. staying in tune with policy and media cycles in a given evaluation context is hugely time consuming. a point was made, and is well-taken, that the transaction costs of this kind of staying-in-tune for policy influence are quite high for researchers. perhaps goals for influence by the immediate researchers and evaluators should be more modest, at least when shaping a specific decision was not the explicit purpose of the evaluation.
two more modest, indirect routes stand out. one is to communicate the findings clearly to, and do the necessary capacity-building with, naturally sympathetic decision-makers (say, parliamentarians or bureaucrats with an expressed interest in x issue) so they become champions who keep the discussion going within decision-making bodies. to reiterate, my view is that a priority for capacity-building efforts should be helping decision-makers become evidence champions and good communicators of specific evaluation and research findings. this is an indirect road to influence but an important one, leveraging the credibility decision-makers hold with one another. the second, also indirect, is to communicate the findings clearly to, and do the necessary capacity-building with, the types of (advocacy? think tank?) organizations whose job it is to track the timing of budget meetings, shifting political priorities, and the local events to which the evidence can be brought to bear.
the happy closing point was that a little bit of passion in evaluation, even while trying to remain neutral and objective, does not hurt.
Thank you for describing so well what took place in the panel, as I could not attend the event. I wish I had been there. Next time....
Thanks for deeply reflecting on the presentation and sharing it back with us, Heather. Thank you very much. Adding to this: what if we also want to influence policy on using gender- and equity-focused evaluation? In my experience this would need continuous collective reflection and thinking through. There was a blog here in this community in which the evaluator said that he realised this when he became a father to a baby girl.
© 2026 Created by Rituu B Nanda.