Hi Gender and Eval Community!

Over on my blog, free-range evaluation, I recently shared some thoughts on how my colleagues and I have been working to support and promote "evaluative thinking," especially among "non-evaluators" (i.e., program implementers who don't see themselves as evaluators, or who may even dislike evaluation).

Here, I share some of those thoughts in the hope of learning what you all do to foster evaluative thinking with the people with whom you interact.

I was inspired by a recent post on the Stanford Social Innovation Review blog that mentioned the importance of evaluative thinking. The post, “How Evaluation Can Strengthen Communities,” is by Kien Lee and David Chavis, principal associates with Community Science.

They describe how, in their organization’s efforts to build healthy, just, and equitable communities, supporting evaluative thinking can provide “the opportunity for establishing shared understanding, developing relationships, transforming disagreements and conflicts, engaging in mutual learning, and working together toward a common goal—all ingredients for creating a sense of community.” In our work with Jane Buckley and Guy Sharrock to promote evaluative thinking within Catholic Relief Services and other community development organizations, we have certainly seen this happen as well.

But how does one support evaluative thinking? On aea365 and in an earlier post here, we have shared some guiding principles we developed for promoting evaluative thinking (ET). Below, I briefly introduce a few practices and activities we have found successful in supporting ET. Before I do that, though, I must first thank and credit the Cornell Office for Research on Evaluation, whose Systems Evaluation Protocol guides the approach to articulating theories of change that has been instrumental in our ET work, and Stephen Brookfield, whose work on critical reflection and teaching for critical thinking has opened up new worlds of ET potential for us and the organizations with which we work. Now, on to the practices and activities:

  • Create an intentional ET learning environment
    • Display logic models or other theory of change diagrams in the workplace—in meeting rooms, within newsletters, etc.
    • Create public spaces to record and display questions and assumptions.
    • Post inspirational questions, such as, “How do we know what we think we know?” (as suggested by Michael Patton here).
    • Highlight the learning that comes from successful programs and evaluations and also from “failures” or dead ends.
  • Establish a habit of scheduling meeting time focused on ET practice
    • Have participants “mine” their logic model for information about assumptions and how to focus evaluation work (for example, by categorizing outcomes according to stakeholder priorities) (Trochim et al., 2012).
    • Use “opening questions” to start an ET discussion, such as, “How can we check these assumptions out for accuracy and validity?” (Brookfield, 2012, p. 195); “What ‘plausible alternative explanations’ are there for this finding?” (see Shadish, Cook, & Campbell, 2002, p. 6).
    • Engage in critical debate on a neutral topic.
    • Conduct a media critique (critically review and identify assumptions in a published article, advertisement, etc.) (an activity introduced to us by evaluation capacity building pioneer Ellen Taylor-Powell).
  • Use role-play when planning evaluation work
    • Conduct a scenario analysis (have individuals or groups analyze and identify assumptions embedded in a written description of a fictional scenario) (Brookfield, 2012).
    • Take on various stakeholder perspectives using the “thinking hats” method, in which participants are asked to role-play a particular stakeholder (De Bono, 1999).
    • Conduct an evaluation simulation (simulate data collection and analysis for your intended evaluation strategy).
  • Diagram or illustrate thinking with colleagues
    • Have teams or groups create logic and pathway models (theory of change diagrams or causal loop diagrams) together (Trochim et al., 2012).
    • Diagram the program’s history.
    • Create a system, context and/or organization diagram.
  • Engage in supportive, critical peer review
    • Review peer logic models (help identify leaps in logic, assumptions, strengths in their theory of change, etc.).
    • Use the Critical Conversation Protocol (a structured approach to critically reviewing a peer’s work through discussion) (Brookfield, 2012).
    • Take an appreciative pause (stop to point out the positive contributions, and have individuals thank each other for specific ideas, perspectives or helpful support) (Brookfield, 2012).
  • Engage in evaluation
    • Ensure that all evaluation work is participatory and that members of the organization at all levels are offered the opportunity to contribute their perspectives.
    • Encourage members of the organization to engage in informal, self-guided evaluation work.
    • Provide access to the tools and resources necessary to support all formal and informal evaluation efforts (including the support of external evaluators, ECB professionals, data analysts, etc.).

What other techniques and practices have you used to promote and support evaluative thinking?

Figure: A theory of change 'pathway model' from CRS Zambia, helping practitioners identify and critically reflect on assumptions.

Note: The ideas above are presented in greater detail in a recent article in the American Journal of Evaluation:

Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation. Advance online publication. doi:10.1177/1098214015581706

----------------------------

References:

Brookfield, S. (2012). Teaching for critical thinking: Tools and techniques to help students question their assumptions. San Francisco, CA: Jossey-Bass.

De Bono, E. (1999). Six thinking hats. London: Penguin.

Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Trochim, W., Urban, J. B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2012). The guide to the Systems Evaluation Protocol (V2.2). Ithaca, NY: Cornell Office for Research on Evaluation. Retrieved from https://core.human.cornell.edu/research/systems/protocol/index.cfm


Comment by Md. Safiur Rahman on June 25, 2015 at 9:20

Dear Tom, thanks indeed for your nice effort. Really excellent.

Comment by Albie Colvin on June 25, 2015 at 3:02

Thanks for sharing, Tom. Lots of great ideas and links to really useful information. Much appreciated!

Comment by Rituu B Nanda on June 24, 2015 at 21:46

Thanks for this valuable post, Tom. I love evaluation for the reflection and subsequent learning it enables. Participatory action research is one tool I have found very helpful for promoting reflection and action. I love it!

Comment by Rajib Nandi on June 24, 2015 at 15:25

Innovative ideas for sharing and promoting "evaluative thinking" among "non-evaluators". Looking forward to reading the detailed article in the American Journal of Evaluation.
