Participating in EES 2014 was an excellent opportunity to rethink assumptions, meet friends again and see a bit of Dublin.
What did I learn from the four days I spent at EES? From the 'systems thinkers' I learnt the distinction between doing things right and doing the right things. If one has the wrong theory of change but a plan of action, one can end up doing the planned things right, but not the right things to address the issue one wants to address, be it poverty, HIV/AIDS, equity and so on. At the same time, there is a need for reflection on what is "right", which can itself be contested. Looking back, it is easier in evaluations to assess whether the things planned are being implemented right than to ask "are the right things being done?" and "whose views are reflected in determining what is right?" (Hummelbrunner, 2014)
Yet another lesson is the need to root our evaluations in theory. While theories of evaluation are becoming popular, the panel on 'what we can learn from the classics' made a brilliant case for rooting our evaluation work in theories from sociology, political science, economics and so on (including Karl Marx!). In fact, the multidisciplinary exploration of years gone by contrasts with the current tendency to specialise. However, nobody made the case for delving into what Simone de Beauvoir and other feminists of that era wrote.
The panel on real-time evaluations was brilliant and pointed to how agencies and individuals can do rapid assessments (3-4) in humanitarian situations and plan forward. A detailed evaluation does not make sense when the situation is changing rapidly. The importance of meeting diverse stakeholders (including men and women from the affected community) during real-time assessments was emphasised, but not that of involving primary stakeholders in planning ahead. In fact, accountability to marginalised groups in evaluations posed a challenge in almost all the sessions, plenaries and workshops that I attended.
While one tends to look at evaluations at the micro level, others were evaluating the European regional cohesion programme. The indicators and the evaluation challenges were different. For example, data on cohesion was not always available at the regional level, and contribution is more difficult to assess. Yet, as women's groups, when we launch regional campaigns on women's rights, minority rights or LGBT rights, we need to learn how to assess regional advocacy and programmes.
I was taught that evaluation comes at the end of the project period. The workshop on evaluability assessments pointed to the need to examine, right after design, whether the theory of change is evaluable, whether the proposed MIS would generate the data needed to evaluate the project, and so on. Evaluability of projects and programmes is important, and it has to be factored in from the beginning.
On the whole, the EES 2014 committee not only 'did things right' but also 'did the right things'. Nevertheless, one lesson is not to have gender-exclusive panels but to integrate gender-specific presentations into the mainstream. Greater effort also needs to be made to bring developing-country voices into the large plenaries; efforts from both developed and developing countries are essential. It would then perhaps not be European, but perhaps the time is right for a Global Evaluation Society!
Hummelbrunner, R. (2014). Systems Thinking, Learning and Values in Evaluation (presentation). Presented at EES 2014, Dublin, 2 October 2014.