Dear colleagues,

Following up on some recent individual exchanges with colleagues regarding the quality of RFPs and ToRs for evaluation in “development”, here are some of my takes:
If it doesn’t value, it is not an evaluation: The extent to which something has been achieved is a measurement question, not a valuing question. Checking whether intended results have been achieved, or whether unintended results (positive, negative, etc.) have occurred, does not by itself make an evaluation. Evaluative reasoning is required to value.
Purpose: The specific value proposition of the specific evaluation to the specific claim/rights holders, based on their value perspectives. To say that the purpose of the evaluation is “learning and accountability” is meaningless (and applicable to thousands of evaluations).
Approach: The valuing frames that evaluative reasoning will consider. Participatory is not an approach. 
Methodology: How the evaluation proposes to fulfill its value proposition, considering context, time and resources.
The OECD-DAC “criteria”, like all other off-the-shelf, one-size-fits-all operationalized value perspectives, are the expression of a specific valuing frame, i.e. a production-process framing, that may not be appropriate for valuing the evaluand at hand. This is especially the case in “donor”-financed RFPs, where they are routinely used without due consideration.
If an RFP and ToR specify the methodology, it is a task-based assignment, i.e. a matter of doing what you’re told and paid for. In those cases, which are the majority, the independence and autonomy of the evaluation are compromised from the start, as are the claims and rights of the holders.
RFPs should include the proposed budget in all cases.
A good question to ask at the start, for both commissioners and interested parties: what will this proposed evaluation do that a performance audit cannot?
Cheers,
Ian

Ian C. Davies
Credentialed Evaluator/Évaluateur Qualifié
idavies@capacity.ca 
Mobile Europe: +33 (0) 6 89 40 88 38
Office: +1 (250) 920-0656 ext 232
Mobile Canada: +1(778) 967-1279
Skype: iancdavies


Comment by Rituu B Nanda on August 28, 2020 at 19:48

I am including here some terms you described in today's SLEVA webinar. Thanks a lot, Ian.

  • Monitoring: a management function; the responsibility and obligation to measure, assess and report on the performance of the intervention and/or entity.
  • Audit: a process superimposed on an accountability relationship to provide assurance (financial, performance, compliance with authorities).
  • Evaluation: a systematic & inclusive process of valuing based on the value perspectives of the claims / rights holders.
  • Rigour: the term (and concept) is rooted in the natural sciences and refers to the ability to precisely replicate a research protocol, e.g. an experiment. Rigour in the social sciences is not a term of art; rather, it is used by different individuals or groups in different ways, including for marketing and self-promotion. As such, it is a loaded term, the use of which in evaluation tends more to obfuscate than to bring clarity and promote understanding.
  • Mixed methods: the original intent of this term (which I got directly from its coiner Carole Weiss years ago) is mixed constructs, i.e. multiple perspectives on valuing constructs.
