Note: This blog post originally appeared on the EvalCommunity website at this link. With many thanks to the EvalCommunity team for choosing my blog for publication! I loved their formatting and how they highlighted pieces of the post.
Objectivity is a mirage: when evaluators enter a project or situation, they change it. Some of that change is subtle and/or not terribly meaningful. Some is intentional, and some is substantial. The effect radiates out from the evaluation team, but most of the effect is never accounted for, in the rush to comply with donor requirements and finish the evaluation.
Some people might be put off by being evaluated, or (the way they might see it) judged. They may feel compelled to put their best foot forward and hide the aspects they're less proud of. Some will feel interrupted – that the evaluation wastes their time and slows their progress.
As evaluators, we can pretend evaluator effects don’t exist or don’t matter, we can try to minimize them, or we can capitalize on them. We probably do a combination of these things already, whether we mean to or not.
Evaluation is an integral part of international cooperation. This, despite the elephant in the room: the current U.S. administration demolished a significant swath of these projects earlier this year, and some European agencies have also slashed budgets substantially. International cooperation still exists, though massively reduced, and evaluation continues to be an important tool to improve these projects where they yet live. In fact, evaluation can help make the best use of drastically reduced resources.

Photo by saira ahmed on Unsplash
Evaluator effects with project staff
Project teams prepare to be evaluated. They collect data, they document their work, they collect success stories, they highlight challenges they’ve handled and the effects on outcomes. They came to their projects wanting to do good – people don’t work on this stuff for the prestige or the salary – so systematic data collection is no surprise.
But unexamined evaluator effects are likelier to be negative for staff. Their initial reactions to “The Evaluators Are Coming!” can be fear, concern, or resistance. Anyone would feel this way: being judged can put us on the defensive. The first thing evaluators can do to alleviate their concerns (and improve the data we collect) is to build the evaluation with their participation, rather than landing on them like Martians with clipboards. If team and component leads can have some input into the evaluation questions, for example, they’re much more likely to buy in to the process and the results – because they themselves want the answers the evaluation promises to find.
When fieldwork begins, it’s the evaluation team’s responsibility to put the staff at ease. Building rapport means speaking to them like human beings, not reading questions like automatons. It means being flexible with logistics, following up on concerns they express, respecting promised confidentiality, and – vitally – taking their input seriously in analysis.
Bring the matches
An evaluator’s active listening is the kindling, and the question is the match.

Photo by Louis Moncouyoux on Unsplash
I often use an opening question that I learned about in an Appreciative Inquiry workshop at an AEA Summer Institute, led by Encompass owner and founder Tessie Catsambas. She said that leading with a positive question allowed respondents to brag a little, and showed them that our focus wasn’t on their errors or failings. She suggested something like: “Tell me your proudest accomplishment in this project.”
For 20+ years, I’ve been listening to people talk about their lives and work. So many of them earnestly love what they do, but they don’t get time to stop and reflect, nor do they have someone who actively listens to them about it. As I take frenzied notes, ask follow-up questions, and reflect their words back to them, they light up. You’ve surely seen this too. Even the most taciturn, annoyed staff members have shaken my hand after an interview, looking like they’ve run a marathon and finished in the top ten.
Useful messiness
In my blog I talk a lot about how messy, nuanced, and complex human endeavors are, and the multiple, overlapping, sometimes contradictory pathways of the projects we evaluate. If you ask a project staffer about their proudest accomplishment, the answer isn’t one word or phrase. They weave in the “how” and the “why”, who was involved, what they did and what others contributed, the changes over time, how it could work if it were tried today, what else had to be in place for it to have worked, how they kept it going… Evaluators who do their homework ahead of time can ask better follow-up questions, identify invisible barriers and enabling mechanisms, and see the world from the respondent’s perspective, using all the cognitive empathy of which they are capable.

(Own drawing. My musical knowledge would fit in a thimble with room to spare. But you get the idea!)
This iterative dynamic – your curiosity and empathetic engagement – deepens the interview. It gives the evaluation team more to work with, more to follow up on, more to analyze alongside responses from others.
All of this applies as well to funders, fundraisers, government officials, civil society – whoever is interested in project success. Their perspectives will shift based on where they sit, their own goals, and whether they believe evaluation can help. But as human beings, they’ll probably be concerned about being judged; they’ll probably enjoy telling you about their work; and they’ll probably have a few of the puzzle pieces you need.
An example of evaluator effects on project staff
I led an evaluation of a multi-country, multi-donor project a few years back. From the first day I met with its combative team lead (he called himself the project’s “CEO”), I knew it would be a difficult slog. The “CEO” kept a tight rein on his team and probably instructed them on how to respond to us. He made it clear to us that he thought the evaluation would be worthless.
Still, we had a mandate from the project’s funder. So, slowly but surely, we interviewed the staff. Some were very guarded. But people liked the “proudest accomplishment” opening question. It became easier to get interview time slots. Staff members were more candid. New interviewees came to the interviews ready to answer the “proudest accomplishment” question, even before we asked it. Most component leads made time for two or three interviews each.
Several months in, the surly team lead called me out in a meeting. He asked me, “When are you going to ask me about my proudest accomplishment?” I had a first interview with him the next day.
So what was our evaluator effect? We may have changed mindsets about evaluation, by being positive and collaborative. We interrupted their work but also celebrated their successes – and, ultimately, their challenges. They were already thoughtful professionals – we did not “cause” them to reflect on their practice. But even thoughtful professionals have blind spots or things to learn, and our queries may have had small, catalytic effects on the thinking and conversations that followed. By the time we presented the results, staff participated in lively conversation. The wary posture we found upon arrival was long gone. Plans were also already in the works to address multiple recommendations we had made.
Evaluator effects on project participants
Project participants experience evaluator effects: logistical effects, like losing two hours of work responding to a detailed survey, or knowing they are being “observed” when an evaluation team comes into their classroom, small business, clinic or community. We need to plan how to eliminate, minimize or mitigate those effects. Some suggestions are below.
Evaluation teams, even if they do not intend to do harm, often arrive in a refugee camp or poor rural community in a nice, air-conditioned vehicle, having slept, bathed and eaten well in a decent hotel in a capital city, floating on the rewards of being born or educated into a ruling or privileged class. Evaluators’ broad knowledge of projects, programs and countries is itself a privilege. This background can breed complacency, and an attitude of having “seen it all”.
Thus the evaluators arrive not having deeply thought about the challenges the participants are facing. This is an important topic for team selection and preparation. Evaluators’ humility and empathy are among the most valuable skills we can offer in the field. We can never be neutral – but we can too easily be unsympathetic, and we can reinforce respondents’ sense of being at the will of powerful “others.” They may not feel they can refuse our request for their time, or may feel they must praise the project, or they may have other reactions we never hear. This is all the more reason to enter thoughtfully, humbly, and gratefully – and ready to listen.
The first place we make a difference is by recognizing the humanity and the agency of each participant – whether they’re poor, in the crosshairs of catastrophe or conflict, living with disabilities, marginalized by society, or some combination. It helps to imagine yourself in their shoes. Not just, “Ah, there but for the grace of god go I…” and on to the task at hand. Instead: thinking about how it would feel to live in a refugee camp and stand in line for rations; or use a wheelchair in a country without well-paved sidewalks and ramps; or to have to hide your sexuality and identity on pain of prison or death.
Grounding our work in humility and empathy, and facing these issues squarely, also improves our data collection and analysis. Evaluation research is sometimes said to be extractive, and it’s not hard to see it from that perspective – especially if we arrive like gods in our chariots, without even attempting to put ourselves in the shoes of the people we’re interviewing or “observing”. Careful preparation and active listening, among other skills, improve our data quality and the quality of these interactions.
Take steps
We can improve our empathy and humility so that evaluator effects are at least potentially positive:
- Before fielding any research, exhaust existing data and/or piggyback on other studies. Usually we only bring up survey fatigue during analysis, as an excuse for poor data! But it is real, and it falls to us to do our research and collaborate on content and timing, to minimize how often these populations are surveyed.
- Design with the end in mind. You wouldn’t expect an evaluation to turn out participatory, inclusive and respectful without concentrated effort from the design stage. Try the principles of the Local Ownership in Evaluation document from ALNAP, or the Institute of Development Studies’ webpage on Participatory Methods. Within whatever constraints your evaluation faces, how can you modify your design for greater local ownership and engagement?
- Stop using the term “beneficiaries” and thereby assigning these human beings a subservient, dependent role. They are agents of their own destinies and deserve the respect of equals, not the condescension of outsiders, however well-meaning.
- Develop instruments and protocols with genuine local input. What looks normal to Global North survey respondents (or local teams educated in the Global North) can be threatening or nonsensical to respondents outside of those contexts, or can just waste their time. See my post on language and translation for more on this topic.
- Reconsider offering incentives. In recent years, many studies have declined to offer survey incentives on the grounds that they might “skew results”. But we are taking productive time from people who can least afford to lose it, and the results may be skewed anyway toward those who have time to spare. Instead, ask respondents how an incentive would affect them!
- Establish thorough safeguards, particularly where respondents are recovering from trauma. Experienced, appropriate interviewers and psycho-social support resources are minimum standards.
- Accept answers neutrally, and include them in analysis – even if you think you already know what they mean. Categorizing some responses as “trying to get another project”, calling them social response bias, or saying they are “telling us what we want to hear” – these are all ways of discounting participants’ perspectives in favor of our own preconceived notions.
- Share results with affected populations and respondents – it’s quite frankly astounding how often aid groups talk about doing this, and then how infrequently they actually do it. Even if you do nothing more to promote local ownership of your evaluation, add “results discussion” visits into the budget!
These issues (and others) constitute an ethics crisis in evaluation, and only by foregrounding participants’ dignity and designing to minimize bias and burden can we hope to leave a mark that is even potentially positive. There are more evaluator effects to discuss. My next blog post will dive into what mutual evaluation capacity building can mean in the context of an evaluation.
Conclusion
Evaluator effects in international cooperation evaluations demand a plan that starts at the earliest stages. Facing project teams’ uncertainty about evaluation means recognizing the unique position evaluators occupy. We can be a sounding board and a catalyst for implementers’ improvement, by involving them in design, listening actively to their triumphs and their challenges, and taking their interpretations on board.
For project participants, evaluators’ ethical imperative is to foreground participants’ dignity and agency while minimizing burden. By building in time to limit unnecessary fieldwork, improve instrument translations, identify opportunities for participatory methods, and ensure evaluators bring their humility and empathy, we can hope to leave a positive mark.
Finally, there are major evaluator effects on evaluation teams – and this blog post is awfully long already, so I’ll cover them in an upcoming post and include that link here when it’s ready. When we pay genuine attention to the opportunities for mutual learning, both the team and the evaluation benefit.





