Evaluation of UN Women’s Work on the Care Economy in East and Southern Africa
A regional study of gender equality observatories in West and Central Africa, carried out by Claudy Vouhé for UN Women
Sources: UN Women
This regional study offers an inventory and analysis of the legal framework of gender observatories, including their mandates, functions and missions. It is based on exchanges with 21 countries, in particular the eleven countries that have created observatories. It compares the observatories' internal organisation and budgets across countries, examines operational practices, notably the degree of involvement in the collection and use of data, and identifies obstacles and good practices in influencing pro-gender-equality public policies. Finally, the study draws up a list of strategic recommendations for observatories, supervisory bodies, and technical and financial partners.
MSSRF Publication - November 2025 - Shared by Rajalakshmi
Ritu Dewan - EPW editorial comment on Labour Codes
Eniola Adeyemi Articles on Medium Journal, 2025
An analysis of the “soft life” conversation as it emerges on social media, unpacking how aspirations for ease and rest intersect with broader socio-economic structures, gendered labour expectations, and notions of dignity and justice.
Tara Prasad Gnyawali Article - 2025
This article focuses on the story of a community living in a wildlife corridor that links India and Nepal: the Khata Corridor, which bridges Bardiya National Park of Nepal and Katarniaghat Wildlife Sanctuary of Uttar Pradesh, India.
It reveals how wildlife movement through the corridor affects community livelihoods, mobility, and social inclusion, highlighting the differential impacts on farming and marginalised communities.
Lesedi Senamele Matlala - Recent Article in Evaluation Journal, 2025
UN Women has announced an opportunity for experienced creatives to join its global mission to advance gender equality and women’s empowerment.
The organization is recruiting a Multimedia Producer (Retainer Consultant) to support communication and advocacy under the EmPower: Women for Climate-Resilient Societies Programme.
This home-based, part-time consultancy is ideal for a seasoned multimedia professional who can translate complex ideas into visually compelling storytelling aligned with UN Women’s values.
Application Deadline: 28 November 2025
Job ID: 30286
Contract Duration: 1 year (approximately 200 working days)
Consultancy Type: Individual, home-based
As a mid-career professional, I have attended many job interviews for evaluator positions where the first question is often about the difference between research and evaluation.
Although research and evaluation are often framed as a dichotomy, they are not two distinct, mutually exclusive categories.
Research is a structured and systematic process of inquiry that aims to generate new knowledge, theories, or insights about a particular topic or field of study. Academic research involves a range of activities that are designed to investigate a specific question or problem, using rigorous methods to collect and analyse data.
Evaluation is a systematic process of assessing the effectiveness of a specific program, intervention, or policy. It involves collecting and analysing data to determine whether a program or intervention is achieving its intended outcomes and to identify areas for improvement. The purpose of evaluation is to inform decision-making and improve program effectiveness.
Research and evaluation therefore have different primary purposes. The purpose of research is to expand knowledge, test and validate theories, contribute to academic discourse, and provide evidence-based solutions to practical problems. The primary goal of evaluation is to assess program effectiveness, identify areas for improvement, inform decision-making, and demonstrate accountability. Yet because research generates knowledge about how the world works and why it works as it does, the state of research knowledge shapes what evaluation can contribute.
Because research aims to expand knowledge by investigating broader, more generalised questions and phenomena, its audience tends to be researchers or a scholarly community. Evaluation, by contrast, focuses on collecting data about a particular program or intervention, producing actionable results for stakeholders and decision-makers.
In terms of methodology, research and evaluation are similar in many foundational respects: both use systematic approaches for collecting and analysing data, drawing on methods such as surveys, interviews, experiments, and observation. They differ primarily in their purpose, flexibility, design, and the audience for their findings.
In terms of design, research is generally less flexible and follows predefined protocols, whereas evaluation is often adaptive and responsive to emerging needs or feedback[1]. Even in sample selection, research often strives for broad or representative samples to generalise findings, while evaluation typically uses purposive sampling focused on those affected by the program. This is because research needs to be publishable for a broader audience, whereas evaluation mostly responds to specific stakeholders and decision-makers and need not always be designed for publication.[2]
According to the Cambridge Centre for Teaching and Learning, in research judgements are made by peers against standards such as validity, reliability, accuracy, causality, generalisability, and rigour, whereas in evaluation judgements are made by stakeholders against standards such as utility, feasibility, perceived relevance, efficacy, and fitness for purpose.
Research as a subset of evaluation, or evaluation as a subset of research?
Research questions can be broad and exploratory in nature, while evaluation questions focus on assessing the success of a specific program or intervention. One point of view therefore treats evaluation as a subset of applied research. This school of thought is premised on the notion that doing research does not necessarily require doing evaluation, but doing evaluation always requires doing research.
In this view, evaluation uses research methods (qualitative, quantitative, or mixed), but within the frame of assessing the performance, outcomes, or impact of a particular intervention. “Evaluation is applied research that aims to assess the worth of a service” (Barker, Pistrang, & Elliott, 2016). “Program evaluation is applied research that asks practical questions and is performed in real-life situations” (Hackbarth & Gall, 2005).
Another point of view holds that research is a subset of evaluation. Mathison (2008), in her article on the distinctions between evaluation and research, argues that while evaluation derives its designs and methods from social science methodology, it has gone much further in the types of designs and methods it uses, such as the most significant change technique, photovoice, cluster evaluation, evaluability assessment, and the success case method. Scriven and Davidson have begun discussing evaluation-specific methodology (i.e., methods distinct to evaluation), including needs and values assessment, merit determination methods (e.g., rubrics), importance weighting methodologies, evaluative synthesis methodologies, and value-for-money analysis (Davidson, 2013). ‘These methods show that, while we indeed incorporate social science methodology, we are more than that and have unique methods beyond that’ (Linnell, 2017).

If you are planning an evaluation, it can therefore be useful to think of research as a subset of evaluation, so that attention is paid to the processes of framing and managing an evaluation as well as the specific research methods used to gather and analyse data. Other tasks may include clarifying or negotiating the primary intended users and their primary intended uses; identifying key evaluation questions; clarifying or negotiating the resources needed to answer the questions; and supporting use of findings.[3]
Each of these framings can be useful for particular purposes.
Evaluation versus Research flashcards
The Evaluation Flash Cards developed by Michael Quinn Patton for the Otto Bremer Foundation include the following flashcard on “Evaluation versus Research”[4]:
| Research | Evaluation |
| --- | --- |
| Purpose is testing theory and producing generalizable findings. | Purpose is to determine the effectiveness of a specific program or model. |
| Questions originate with scholars in a discipline. | Questions originate with key stakeholders and primary intended users of evaluation findings. |
| Quality and importance judged by peer review in a discipline. | Quality and importance judged by those who will use the findings to take action and make decisions. |
| Ultimate test of value is contribution to knowledge. | Ultimate test of value is usefulness to improve effectiveness. |
Source: Patton, Michael Quinn (2014). Evaluation Flash Cards: Embedding Evaluative Thinking
To conclude, John LaVelle offers a simple hourglass analogy: the differences between research and evaluation are clear at the beginning and end of each process, but in the middle (methods and analysis) the two are quite similar.[5]
This handout was prepared to unpack the nuances of research and evaluation. While many of us may be asked about the difference in job interviews and elsewhere, the key point to make is that the two are not mutually exclusive.
[1] https://www.evalacademy.com/articles/research-and-evaluation-the-ar...
[2] https://www.cctl.cam.ac.uk/educational-enquiry/research-evaluation
[3] https://www.betterevaluation.org/blog/week-19-ways-framing-differen...
[4] https://www.betterevaluation.org/sites/default/files/2023-06/OBT_fl...
[5] https://www.danalinnell.com/blog/evaluation-is-not-applied-research
© 2025 Created by Rituu B Nanda.