I recently had the opportunity to work with OMG Center for Collaborative Learning in Philadelphia, Pennsylvania, in the United States. OMG's mission is to "accelerate and deepen social impact through strategy, evaluation, and capacity building." 

Below is a link to an interview I did with OMG about Cultural Responsiveness in Evaluation. During the conversation we discussed several issues including: (1) Culturally Responsive Evaluation: A Definition; (2) Evaluating Data through a Culturally Responsive Lens; (3) Culturally Responsive Evaluation: Examples in Practice; (4) Culturally Responsive Evaluation: Potential Challenges; (5) Culturally Responsive Evaluation: Advice to Practitioners; and (6) The Importance of Culture to Evaluation.

I hope that you will enjoy these videos, and that you will provide commentary and ideas about your own experiences in evaluation. I look forward to the discussion!

Interview with Dr. Katrina Bledsoe - Part 1 from OMG Center on Vimeo.

The link: http://vimeo.com/channels/omgcre

Interview with Dr. Katrina Bledsoe - Part 2 from OMG Center on Vimeo.


Comment by Bhabatosh Nath on May 28, 2014 at 0:28

Dear Katrina,

It is so nice of you to go through the comments and to share your experiences and thoughts so rationally. Thank you for your viewpoints. Yes, you are quite right that the continuity of our conversation will help us continue to progress.

Let me start from the end of your detailed comments. I would like to highlight your remark that "evaluation is a reflection of the program," and to add that evaluation is a reflection of the program, the project, and the organization itself. So what exactly must be done to make the evaluation 'treasured' by both clients and evaluators? I think the time has come to organize our thinking, mobilize the stakeholders (including the donors), and make all of us 'responsive' to the evaluation process in our own fields.

(i) Devising precise methodologies, (ii) developing pertinent tools, (iii) engaging team(s) to conduct the evaluation in the field, (iv) analyzing the collected data/information strictly in relation to the program/project/organizational goals and outcomes, (v) writing a quality report, (vi) sharing the evaluation findings, and finally (vii) 'using' the evaluation report are so interrelated that we evaluators, and all other related stakeholders, should recognize the value of, and 'own', each step of the evaluation. Thanks for sharing your experience of your work in a local high school on a health-focused project. It is very encouraging for others that you finally succeeded in involving the students in redesigning your surveys and creating scope for them to raise their voices in the larger group. That is the way to meet challenges and carry out the work according to the evaluator's plan, so that the evaluation is meaningful and 'responsive'.

You mentioned that your colleague and mentor Donna Mertens uses the 'transformative approach' in evaluation. I agree that it is a very good approach, but it is also not easy to implement, especially in involving the stakeholders with their full support and sincere contribution. I too try to follow this approach, and in some cases I have succeeded, but it has taken far more time than the 'duration' that the clients, implementing organizations, or donors fixed for the evaluator in the ToR. In this regard I would say that 'time' and delivering a 'quality product' are two crucial factors for evaluators.

In this respect, I have an issue to raise regarding the ToR developed by donors and implementing organizations. Have you ever compared the objectives and tasks of the ToR with the stated goals, outcomes, and outputs of the project/program to be evaluated? Have you seen any differences or discrepancies? As evaluators, do we just follow what the ToR says, or do we also have our 'say' on the ToR to make it relevant and ensure a 'responsive evaluation'? I believe that we as evaluators should have the requisite expertise to challenge the ToR and to share our ideas in finalizing it, but I doubt whether this role would be valued and acknowledged by the clients/authorities. Let's share our experiences!

Katrina, I have raised many issues in this discussion. I don't know whether they are all relevant to what you wish to explore in the evaluation field. Please share your opinion.


Comment by Katrina L. Bledsoe on May 27, 2014 at 20:34

Dear Bhabatosh,

Thank you for your post on May 8th; I enjoyed reading your thoughts. I have been traveling for work, so I have only now had time to sit down and add to the conversation.

Bhabatosh, I appreciate your comments; they challenge me to continually think and to do that necessary self-assessment work. It is absolutely difficult to conduct collaborative and partnership-focused evaluation. Many "collaborations" and/or "partnerships" never make it past the proposal or kick-off meeting stages. When this happens, I usually find that I've had to make some choices, and I end up asking myself questions such as: What is the greater good? Is it worth continuing on, even if the level of participation that should be there is not? I know that the trend is to espouse participatory approaches, but sometimes using a participatory approach is not possible. What then? Is it a bad evaluation if everyone is not at the table? For me this is a contextual issue and, in some ways, a personal values choice. Will the results help? Can the work inform the community at large? How can I insert the voices of those who are left out of the conversation? Is there a way I can be a conduit? Can I provide information that might alter the context for the better? These questions and others guide my thinking in every evaluation, but especially in those that don't include the voices of the people who would be most affected by the findings and subsequent decisions.

I remember my work with a local high school on a health-focused project. The teaching staff and administration had developed programming based upon their values. Having had sideline conversations with students about their views and thoughts, I really tried to have the students become a prominent part of the leadership team. This, of course, was unheard of and not accepted. But I knew that without those consumer/beneficiary voices, the students would be further stereotyped. My solution at the time was to develop an advisory board comprised solely of students. The meetings provided me a way to get those voices heard at the larger decision-making table. Was there resistance? Absolutely, because the students' views were contrary to what the leadership constituency wanted to hear. In the end, however, the team realized that suppressing those voices would not be advantageous, especially since they had expressed feeling "love and concern" for the students' well-being.

My team and I also used this method to our advantage by involving the students in the redesign of our surveys. We asked them to review our work, and those students who were especially interested also became evaluation team members, helping to ensure the surveys were distributed and the data collected. It is one of my finer moments in ensuring beneficiary participation on the evaluation team. ☺ But as we know, this is not always easy and in many cases cannot be guaranteed. Additionally, what is a successful collaboration in one context may not be in another.

Bhabatosh, you mention that having the program/management staff of a program or organization involved in the evaluative process can sometimes be challenging. It certainly can push one to ask, "How much responsiveness is too responsive?", especially when it seems to "interfere with" or "hamper" the evaluative process. I am struggling with that now on a project where key players seek to be part of the evaluation team but sometimes make it challenging to do a good evaluation. I admit that I sometimes get frustrated when discussions about the reliability and validity of instruments are dismissed and items are changed in ways that make it more difficult for beneficiaries and consumers to answer. My response has been to really dig and find out what is most valuable to those constituents, and to remind them that we really need to acknowledge the voices that provide information in our planning. Sometimes this works; other times it is ignored, because constituents sometimes feel as if they know what's best for the consumers and beneficiaries (or the field, etc.). This latter conversation is less about the evaluation and more about ways of thinking and being. And that is a larger, more complex discussion about the human condition in general.

That said, I admit that I have had to have a self-assessment conversation with myself. I considered taking myself off the evaluation because I was frustrated. But I think about my colleague and mentor Donna Mertens and her work using the transformative approach in evaluation. She often says that the work is not easy. For her, the transformative and CRE approaches are stances, ways of viewing the evaluative process and the world in general. The work takes patience and, as Donna says, really requires one to admit what they can tolerate in an evaluation and what they cannot. I try my best to work with difficult staff because having their voices is still crucial to me, even if those voices are dissenting or make the evaluation a bit more difficult. For me, those voices hold value because, usually, the evaluation is a reflection of the program. Of course, value is in the eye of the beholder. ☺

Bhabatosh, I want to thank you for your comments; they continue the conversation and help us to keep progressing, even in the most difficult of situations.


Comment by Bhabatosh Nath on May 8, 2014 at 1:19

Dear Rituu,
Thank you so much for sharing Dr. Katrina's videos and your queries.
"Culturally Responsive Evaluation" is very promising and carries great value in the evaluation field. But how many of us (as evaluators) truly 'own' our responsibilities in doing evaluation? How many of us do evaluation 'independently', and how many of us have to carry out our jobs in accordance with the 'willingness' of the so-called 'clients'? To ensure responsiveness, a 'participatory approach' is a precondition. But in actuality, how do we ensure the participation of all related stakeholders? What about the grassroots-level beneficiaries and staff in the field? How are they involved in designing evaluation methodologies, implementing the study in the field, and sharing the evaluation report, and what finally happens with the 'use' of the evaluation report? I have many experiences where donors and implementing organizations, in the name of 'participation', develop a ToR that calls for 'teams' such as a 'project management team', 'advisory team', and 'technical team' to work with the external evaluation team. However, when we actually start work in the field, the existence of those teams and the role of their members remain almost 'zero'.

Using GIS in evaluation is a very effective method for helping readers understand the situation easily. Thanks to Dr. Katrina for emphasizing it.

One vital issue to discuss with Dr. Katrina concerns the involvement of the field staff/management staff of a particular project or organization with the external evaluation team. Could you please share your experience of involving these staff and their participation in the evaluation process? Do they 'influence' the external evaluation team, or is it the responsibility of the evaluation team to use the expertise of those staff to facilitate the evaluation work and accomplish the job effectively? In my field experience, it is often fine to take them with us, and we certainly know ways of collecting data/information even in the presence of those staff. But sometimes it creates problems when the staff have a mentality of 'monitoring' or 'supervising' the evaluation work. That really creates annoyance and raises a big question about the 'responsiveness' of the evaluation. Please share your ideas.

Comment by Katrina L. Bledsoe on May 2, 2014 at 21:04

Thanks to Rituu, Ujwala, and Madhumita for posting such great comments and questions. Rituu, I'll start with your comments first. I think the evaluator should be self-assessing in any evaluation, not just a culturally responsive one. That said, what I mean by self-assessment is understanding your own values, upbringing, thoughts, and opinions (to name a few) about the world, and understanding how those can influence how you do the evaluation. For instance, people have biases. We have ideas about how the world should run, who should run it, etc., and those ideas can impinge upon how we go about doing the evaluation, the questions that we choose to ask, and so on. I think back to when I did a project on obesity and the measures we used to assess ethnic identity. It didn't dawn on us that the measures were inappropriate for that region. We used them because that's the way we personally viewed ethnic identity at the time. It turned out to be quite disastrous. We had to backtrack, come to grips with our views, and start talking to people to find out what ethnic identity meant in that setting. And I think that's hard for evaluators to admit; really, it's hard for anyone to admit that what they think, and how they think about it, can influence the questions that are asked, the measures chosen, etc. When I say self-assessing, I mean doing that hard internal work that we don't always have time to do, asking, "Why am I doing the evaluation?" "What is my agenda?" "Is it something I believe in?" "What do I think of the people whose lives this evaluation might influence?"

As for your second question, I recently worked with a project in North Carolina. We developed a collaborative team that considered the evaluation the first part of their whole parental-engagement agenda. The team included parents, school administrators and former teachers, community members, and university staff. As a team we worked together to determine the issues we really wanted to address. We determined the evaluation questions, and we worked together to identify participants. When the time came to write the report, we all worked on it together. I did a lot of the "boots on the ground" work, but the conceptualization and interpretation were conducted as a team. When the time came to present to the school superintendent, it may have been me presenting, but the team was in the room.

I gravitate to participatory approaches (though I use whatever evaluation approach is most appropriate and gets the job done) because I'm a participatory person in general. I'm that way in my everyday life; that's who I am and I know it. I also try to be aware when the participatory approach doesn't work for an evaluation. In that case I either a) put it aside and not use that approach, b) bring someone else in who might have a better sense (this gets to Ujwala's point about bringing in local experts), or c) back off the evaluation. But you have to know yourself to be able to make good decisions about whether or not you can conduct a "good" evaluation. Again, that just makes good sense for any evaluation.

Ujwala, your point about "baggage" hits the nail on the head. We all have it, and it can influence the way we think about an evaluation, sometimes for the good and sometimes for the not-so-good. Again, it gets to the point of being honest about that baggage and keeping it in check.

Madhumita, I agree with your point that culture should be a part of any evaluation. When I say culturally responsive, I mean culture broadly, not just ethnic culture. In every situation there is a culture, so it has to be acknowledged and addressed. For instance, I live in the Washington, DC metropolitan area in the US, and I used to take the Metro train to work every day. There is a culture that commuters adhere to. During the work week, if you are not in a hurry, you stand to the right on the escalator; the left side is for walking up or down the escalator stairs. When you get to the train, you know not to stand in front of the doors. If you take the same trains all the time, you see the same people, and there is a little culture about which door you stand at and who sits or stands where on the train. Understanding the culture of the train commuter is crucial to being able to a) get on a train, b) get help when you need it, and c) get to work or your destination on time.

Thanks to you all for such great comments. I am thrilled to be a part of a community doing such awesome work, and making the world we live in a better place!

Comment by Rituu B Nanda on May 2, 2014 at 15:47

Dear Katrina,

I found your videos very useful: you use out-of-the-box methods in evaluation, and the example of using GIS was very interesting. I have two questions. First, please elaborate, with an example, on what you mean by "the evaluator has to be self-assessing." Second, you say that a higher degree of participation is required in culturally responsive evaluation; please share an example where you achieved this.

Thanks Katrina. I enjoyed your videos very much. 

Comment by Rituu B Nanda on May 1, 2014 at 8:29

Here is a response from the Monitoring and Evaluation professionals group on LinkedIn. Thanks to Ujwala Samant for sharing this.

Ujwala Samant

Director, Programs and Services at Food Bank of South Jersey

Absolutely! As ED of Learning for Life UK, I tried to avoid sending experts from the UK to conduct evaluations in South Asia. The baggage they bring can often be counterproductive to the evaluative process. I consistently vetted local experts who were not linked to the project but who could give an outside-inside perspective and analysis, which was more relevant than having a different cultural context for data analysis hovering in the background.

Comment by madhumita sarkar on April 30, 2014 at 22:47

Culturally responsive/sensitive monitoring and evaluation is extremely important, even in humanitarian crisis situations. One cannot assume that one size fits all. Culture is an integral aspect of a social being and, as Dr. Bledsoe rightly pointed out, culture is based on values. Culture must be an integral component of all evaluations to ensure the evaluation is not biased.
