The trouble with evaluation (1-Data)

Invitation to help improve evaluation methods.

Dear Members and Colleagues,

A project funded by The Health Foundation (UK) is comparing evaluation guidance with the lived experiences of evaluators. Current guidance rests on the key assumption that forward planning can prevent problems. In the real world, however, despite good intentions and evaluation expertise, problems do happen, and there is little guidance about how to fix them. This seems particularly relevant to Quality Improvement (QI) projects.

Our response is to “crowdsource” a knowledge base of experience and advice about how to improve the integration of evaluation and QI. We invite anyone with experience of evaluation to complete an initial survey, which takes one of the key themes – that of data access, collection and analysis – as its starting point. We will be releasing further surveys on other topics over the next few weeks. To access the survey, use the link below.

Or copy and paste this link into your browser: https://scharr.eu.qualtrics.com/jfe/form/SV_abiDRf2XrOvk7vU

Further details about the project are available below.

Current Assumptions

It is often thought that the key to delivering a successful evaluation of any intervention lies in producing an appropriate and detailed evaluation plan, and then enacting that plan with adequate time and resources, and with the full collaboration of the intervention team and other stakeholders. Consequently, most guidance focuses on the elements of an evaluation that should be considered in advance in order to develop, design and schedule an evaluation plan that, in appearance at least, will meet the needs of the stakeholders.

The Problem

Of course, in practice, and even with the best possible design in place, evaluation projects do not always follow a smooth course. It is impossible to anticipate every problem that may arise. These include: practical difficulties in accessing data, a lack of resources at key moments, changes to the intervention itself, and external factors such as policy changes and global pandemics. Any of these can throw plans off course and undermine evaluation activities. When evaluations do go wrong, or at least not according to plan, the existing guides offer little practical advice about how to rectify matters and rescue the evaluation process – if, indeed, it can be rescued.

Evaluation of quality improvement (QI) projects seems particularly susceptible to these problems, since QI projects are often complex and changeable interventions in open systems that can have unanticipated emergent characteristics. A number of areas of tension can potentially derail the evaluation of QI projects. For instance, differing priorities mean that QI projects will often place the implementation of change above the requirements for evaluating that change: for good or for ill, change becomes an end in itself.

The Project So Far

These conclusions are among the findings of the Connecting to integrate Quality Improvement and evaluation practice project, carried out by a team led by researchers from the Universities of Sheffield and Bath and supported through the Connecting Q programme of The Health Foundation. The project has undertaken a series of evidence reviews and consultations with QI practitioners and evaluators, with the aim of bridging the divide between evaluation theory and QI practice. One thing that has emerged is the lack of practical guidance, both for recognising the problems that can arise and for indicating how to respond when serious problems do occur while an evaluation is underway. Also apparent, however, is the breadth of practical experience that exists in the QI and evaluation communities, and its potential, if properly harnessed, to provide an invaluable shared resource for improving evaluation processes.

The Plan

To create such a resource, we first need to understand the nature of the problems that do arise. Our review of the published literature about the evaluation of QI projects (in both the UK and abroad) suggests that problems can be clustered around a relatively small set of themes or dimensions of the evaluation:

  • Data access, collection and analysis
  • The purpose and parameters of evaluation
  • Stakeholder management and engagement
  • Evaluation approaches and methods
  • Resources, knowledge and skills
  • Timing and timeliness
  • Culture and context

Note that certain problems can be related to more than one of these themes, which is unsurprising given the complexity of most evaluation efforts (as well as the somewhat artificial nature of any attempt to classify them into neat categories). Nonetheless, these themes give us a scaffold for talking about the problems that do occur.

While not all the problems will have easy solutions, we believe that the collected experience of evaluation and QI practitioners offers a pool of practical knowledge for both recognising problems and suggesting solutions. As a first step we would like to try to “crowdsource” a knowledge base of experience and advice about how to improve the integration of evaluation and QI; we invite anyone with any experience of the evaluation of QI or other interventions to complete an initial questionnaire which takes one of these themes – that of data access, collection and analysis – as its starting point.

You will be asked to describe your experiences of tackling problems relating to this theme. You will also be able to share your opinions on the approach we are taking. To access the questionnaire, use the survey link above. We look forward to receiving your responses!


Comment by Steven Ariss on May 25, 2022 at 0:46

Hi Rituu,

great to hear from you! I hope you are well.

The use of participatory processes sounds like an interesting thing to consider.

We are in a process of iteration at the moment, where we are slowly developing tentative findings and refining and testing them with key stakeholders. As well as the online questionnaires, we are planning a workshop at the HSR UK 2022 conference, which will be our next milestone.

We should have some firmer findings to share after this. It would be great to have a chat about the project, as I'm sure you will have some great ideas.

Comment by Rituu B Nanda on May 25, 2022 at 0:06

Hi Steven, will you be sharing the results of your study? I facilitate participatory processes; have you thought of that? Thanks and warm greetings!

© 2022   Created by Rituu B Nanda.