
Decolonizing Evaluation: 4 conclusions from a panel of donors

By Ben Bestor

In recent years, the chorus of voices calling for the decolonization of aid has been growing, demanding a reevaluation of the way programs are designed and even the way they are delivered. This evaluation – a process that consists of critically and systematically analyzing the design, implementation, improvement or results of a program – is an integral part of a broader dialogue about decolonization.

When it comes to evaluating a project or program, it is worth reflecting on a series of questions. What constitutes “effectiveness,” how is it measured, and who determines it? Whose values, priorities, and worldviews shape the evaluation? Historically, it has been donors and international non-governmental organizations (INGOs) – in other words, external parties – who determined what would be evaluated, when, by whom, and with what methodologies, with little relevant input from the people meant to be reached by the programs in question. This needs to change. But what will this change look like?

Decolonizing evaluation involves focusing on who is doing the work and how the work is being done. It means, first, putting funders, evaluators, implementers, and communities on an equal footing. Second, it means identifying and addressing the power imbalances that exist throughout the evaluation system, from the design and implementation of an evaluation to the dissemination and use of its findings.

This is challenging because it forces us to reconsider not only how we conduct evaluations (e.g., the methods used), but also how we think about evaluations (e.g., their purpose).

On September 22, representatives from three funders sat down with the InterAction Program Effectiveness and Evaluation Community of Practice (EPE CoP) to discuss how they are addressing these challenges.

Subarna Mathes of the Ford Foundation, Colleen Brady of USAID, and David Burt of the Start Fund spoke about what it means to decolonize, or change the power structure in evaluation practice, and how their organizations are addressing the issue. Here are four conclusions that resulted from this conversation:

  • 1 We need to go beyond tokenism: Changing the power structure in evaluation practice requires more than symbolic participatory approaches; it requires deep involvement of local stakeholders throughout the evaluation process, and even before it. “Often, the first thing that comes to mind when we think about participatory approaches is how to integrate local communities into data collection or data analysis processes; for example, hiring local staff to act as enumerators or field agents,” says Colleen. She adds: “Participatory approaches to evaluation need to start from participatory approaches to implementation,” before the evaluation even begins. Subarna echoed these thoughts, noting that “if we don't think about ways to distribute power in the design [of the program] and in the distribution of who receives the resources, evaluation will already enter the game a little late.” By integrating partner voices into the program design process, an organization can take steps to decolonize not only the evaluation, but the program itself. This takes time and intentionality, but it produces a stronger evaluation with greater buy-in.
  • 2 The evaluation must consider the learning of all parties involved: When it comes to evaluations, more emphasis needs to be placed on learning and adaptation than on compliance and accountability. Ultimately, the primary goal of an evaluation should be to produce useful knowledge. But for whom should the knowledge be useful, and whose “usefulness” should be prioritized when designing an evaluation? Evaluation cannot serve only the funder’s learning. It is critical that the learning evaluations generate benefits both funders and communities, and it is important to invest time and resources in getting information back to communities. Feedback loops must be closed by sharing evaluation results with all interested parties, ensuring that learning drives continuous improvement and ownership of results at all levels. As one participant noted on an interactive MURAL board during the panel, “the evaluation needs to add equal or greater value to the participants, so that it is relational and has the effect of adding, not subtracting.”
  • 3 It is important not to impose methods or approaches: In this sector, there has historically been a preference for, or greater reliance on, certain evaluation methods and approaches. Funding often depends on reporting certain metrics or evaluating topics of interest to funders. Consequently, measurement and evaluation frameworks end up being imposed on organizations and shaped by power dynamics. David points out that “the fear of not getting funding in the future is often enough to stop organizations from trying new things or changing their methodologies,” even if the methods or metrics don't make much sense. The danger of sticking to a funder's preferred method or approach, regardless of context or circumstances, is that important knowledge and learning can be missed. For example, imposing a certain method or metric without taking into account the context or views of local communities can produce misleading conclusions, meaning the findings of an evaluation may not accurately reflect the experiences of the people served. Instead, funders should be open to working with partners, evaluators, and communities to determine appropriate methods and approaches in each context. The evaluation must be co-created by all parties involved.
  • 4 Reduce the burden on local partners: The panelists identified several ways funders can lighten the load on their partners. One way is to speak the local language. In practice, this may include issuing requests for proposals (RFPs) or accepting evaluations written in other languages. An English-only requirement creates a barrier for non-English speakers and those who do not have English as their primary language: instead of focusing on the work that really matters, partners end up busy translating documents. Using the local language also improves accessibility for local communities, ensuring they are able to assess, confirm, and share findings in their own language.

A second practical measure is not to impose onerous requirements, whether responding to long RFPs, undertaking extensive data collection efforts, or producing box-ticking reports with no practical use. Subarna explained how Ford has taken steps to simplify its RFP process for evaluators, including eliminating page limits for submissions and the requirement for a detailed budget or work plan. Instead, Ford takes a high-level approach, opening a dialogue with the evaluator(s) before a decision is made. In terms of data collection, funders can de-emphasize gathering large volumes of data that will never be used or that bear only a tangential relationship to the program in question. If an issue is not of central importance to the program, partners should not devote valuable time to collecting data on it.

Third, funders must clearly communicate their expectations from the beginning, as early as the RFP process. Many evaluators have faced unclear RFP processes that demand vast financial and human resources and a large time commitment without clearly expressing what the funder is actually seeking. Funders can help evaluators by describing exactly what they want, when, and how. For example, in an RFP, state your objectives and proposed evaluation questions, state your budget, and explain what you are looking for in an evaluation partner and your criteria for selecting one. Provide a timeline for the evaluation and selection process. And most importantly, ask for feedback from all candidates to improve the process in the future.

Want to know what else these funders had to say about decolonizing evaluation and how it’s being done in their organizations? A full recording of the event is available here.


*The text above was originally published on the InterAction blog: https://www.interaction.org/blog/decolonizing-evaluation-4-takeaways-from-a-donor-panel/

Ben Bestor is senior coordinator of global development and learning programs and policy at InterAction.
