Monthly Corner

IDH Publication, 2026

Gender-Based Violence (GBV) is not just a social issue; it is a systemic challenge that undermines agricultural value chains.

In rural and isolated areas, GBV threatens women’s safety, limits their economic participation, and weakens food security. When women cannot work safely, entire communities lose resilience, and businesses lose productivity. Climate resilience strategies that overlook gendered risks leave communities exposed and women vulnerable.

Ending GBV is essential for building equitable, sustainable, and climate-resilient agri-food systems. It is not only a human rights imperative but also central to climate adaptation and economic stability.

The good news? Solutions work. Programs like the Women’s Safety Accelerator Fund (WSAF) demonstrate that addressing GBV can enhance productivity and strengthen workforce morale and brand reputation. Safe, inclusive workplaces aren’t just good ethics; they’re smart business.

Gurmeet Kaur Articles

Luc Barriere-Constantin Article

This article draws on the experience gained by The Constellation over the past 20 years. It also proposes a new M&E and Learning framework that any community-focused organisation can adopt and adapt in future projects.

Devaka K.C. Article

Sudeshna Sengupta Chapter in the book "Dialogues on Development", edited by Prof Arash Faizli and Prof Amitabh Kundu

Vacancies

  • We’re Hiring: National Evaluation Consultant – Bangladesh

UN Women is recruiting a National Evaluation Consultant (Bangladesh) to support the interim evaluation of the Joint Regional EmPower Programme (Phase II).

This is a great opportunity to work closely with the Evaluation Team Leader and contribute to generating credible, gender-responsive evidence that informs decision-making and strengthens programme impact.

📍 Location: Dhaka, Bangladesh (home-based with travel to project locations)
📅 Apply by: 24 February 2026, 5:00 PM
🔗 Apply here: https://lnkd.in/gar4ciRr

If you are passionate about feminist evaluation, gender equality, and rigorous evidence that drives change (or know someone who is), please apply or share within your networks.

  • Seeking Senior Analyst - IPE Global

About the job

IPE Global Ltd. is a multi-disciplinary development sector consulting firm offering a range of integrated, innovative and high-quality services across several sectors and practices. We offer end-to-end consulting and project implementation services in the areas of Social and Economic Empowerment, Education and Skill Development, Public Health, Nutrition, WASH, Urban and Infrastructure Development, Private Sector Development, among others.

Over the last 26 years, IPE Global has successfully implemented over 1,200 projects in more than 100 countries. The group is headquartered in New Delhi, India, with five international offices in the United Kingdom, Kenya, Ethiopia, the Philippines, and Bangladesh. We partner with multilateral and bilateral agencies, governments, corporates, and not-for-profit entities in anchoring the development agenda for sustained and equitable growth. We strive to create an enabling environment for path-breaking social and policy reforms that contribute to sustainable development.

Role Overview

IPE Global is seeking a motivated Senior Analyst – Low Carbon Pathways to strengthen and grow its Climate Change and Sustainability practice. The role will contribute to business development, program management, research, and technical delivery across climate mitigation, carbon markets, and energy transition. This position provides exceptional exposure to global climate policy, finance, and technology, working with a team of high-performing professionals and in collaboration with donors, foundations, research institutions, and public agencies.

For more details, please go through the full job posting.

reflections from panel at #evalcon in kathmandu: evidence-uptake & capacity-building

this is a cross-posting from http://hlanthorn.com/2015/11/26/thoughts-from-evalcon-on-evidence-u.... hope it is useful to folks who didn't make the think tank initiative's panel today in kathmandu.

i attended a great panel today, hosted by the think tank initiative and idrc and featuring representatives from three of tti’s cohort of think tanks. this is part of the broader global evaluation week (#evalcon) happening in kathmandu and focused on building bridges: use of evaluation for decision making and policy influence. the notes on evidence-uptake largely come from the session while the notes on capacity building are my own musings inspired by the event.


one point early-on was to contrast evidence-informed decision-making with opinion-informed decision-making. i’ve usually heard the contrast painted as faith-based decision-making and think the opinion framing was useful. it also comes in handy for one of the key takeaways from the session, which is that maybe the point (and feasible goal) isn’t to do away with opinion-based decision-making but rather to make sure that opinions are increasingly shaped by rigorous evaluative evidence. or to be more bayesian about it, we want decision-makers to continuously update their priors about different issues, drawing on evidence.


this leads to a second point. in focusing on policy influence, we may become too focused on influencing very specific decision-makers for very specific decisions. this may lead us to lose sight of the broader goal of (re-)shaping the opinions of a wide variety of stakeholders and decision-makers, even if not linked to the immediate policy or program under evaluation. so, again, the frame of shaping opinions and aiming for decision-maker/power-center rather than policy-specific influence may lead to altered approaches, goals, and benchmarks.


a third point that echoed throughout the panel is that policy influence takes time. new ideas need time to sink in and percolate before opinions are re-shaped. secretary suman prasad sharma of nepal noted that from a decision-maker point of view, evaluations are better and more digestible when they aim to build bit by bit. participants invoked a building blocks metaphor several times and contrasted it with “big bang” results. a related and familiar point about the time and timing required for evaluation to change opinions and shape decisions is that planning for the next phase of the program cycle generally begins midway through current programming. if evaluation is to inform this next stage of planning, it requires the communication of interim results — or a more thoughtful shift of the program planning cycle relative to monitoring and evaluation funding cycles in general.


a general point that came up repeatedly was what constitutes a good versus a bad evaluation. this leads to a key capacity-building point: we need more “capacity-building” to help decision-makers recognize credible, rigorous evidence and to mediate between conflicting findings. way too often, in my view, capacity-building ends up being about how particular methods are carried out, rather than about the central task of identifying credible methodologies and weighting the findings accordingly (or about broader principles of causal inference). that is, capacity-building among decision-makers needs to (a) start from an understanding of how they currently assess credibility (on the radical premise that capacity-building exercises might generate capacity on both sides) and (b) help them become better consumers, not producers, of evidence.


a point that surfaced continuously about how decision-makers assess evidence was about objectivity and neutrality. ‘bad evaluations’ are biased and opinionated; ‘good evaluations’ are objective. there is probably a much larger conversation to be had about parsing objectivity from independence and engagement as well as further assessment of how decision-makers assess neutrality and how evaluators might establish and signal their objectivity. as a musing: a particular method doesn’t guarantee neutrality, which can also be violated in shaping the questions, selecting the site and sample, and so on.


other characteristics of ‘good evaluation’ that came out included not confusing being critical with only being negative: findings about what is working are also appreciated. ‘bad evaluation’ assigns blame and accountability to particular stakeholders without taking a nuanced view of the context and events (internal and external) during the evaluation. ‘good evaluation’ involves setting eval objectives up front. ‘good evaluation’ also places the findings in the context of other evidence on the same topic; this literature/evidence review work, especially when it does not focus on a single methodology or discipline (and, yes, i am particularly alluding to RCT authors that tend to only cite other RCTs, at the expense of sectoral evidence and simply other methodologies), is very helpful to a decision-making audience, as is helping to make sense of conflicting findings.


a final set of issues related to timing and transaction costs. a clear refrain throughout the panel was the importance of the timing of sharing the findings. this means paying attention to the budget-making cycle and sharing results at just the right moment. it means seeing windows of receptivity to evidence on particular topics, reframing the evidence accordingly, and sharing it with decision-makers and the media. it probably means learning a lot more from effective lobbyists. staying in tune with policy and media cycles in a given evaluation context is hugely time consuming. a point was made and is well-taken that the transaction costs of this kind of staying-in-tune for policy influence are quite high for researchers. perhaps goals for influence by the immediate researchers and evaluators should be more modest, at least when shaping a specific decision was not the explicit purpose of the evaluation.


one modest route is to communicate the findings clearly to, and do the necessary capacity-building with, naturally sympathetic decision-makers (say, parliamentarians or bureaucrats with an expressed interest in x issue) so they become champions who keep the discussion going within decision-making bodies. to reiterate, my view is that a priority for capacity-building efforts should be helping decision-makers become evidence champions and good communicators of specific evaluation and research findings. this is an indirect road to influence but an important one, leveraging the credibility of decision-makers with one another. a second route, also indirect, is to communicate the findings clearly to, and do the necessary capacity-building with, the types of (advocacy? think tank?) organizations whose job is to track the timing of budget meetings and the shifting political priorities and local events to which the evidence can be brought to bear.


the happy closing point was that a little bit of passion in evaluation, even while trying to remain neutral and objective, does not hurt.


Comment by Liisa Horelli on December 14, 2015 at 18:47

Thank you for describing so well what took place in the panel, as I could not attend the event. I wish I had been there. Next time....

 

Comment by Rituu B Nanda on December 14, 2015 at 13:01

Thanks for deeply reflecting on the presentation and sharing it back with us, Heather. Thank you very much. Adding to this: what if we also want to influence policy on using gender- and equity-focused evaluation? In my experience this would need continuous collective reflection and thinking through. There was a blog here in this community in which the evaluator said that he realised this when he became a father to a baby girl.
