Evaluation Methods

The field of evaluation is constantly evolving, and effective practice requires thoughtful choices of methodology. Below is an overview of fundamental methodological options.

Evaluations draw on a wide spectrum of social science research methods, reflecting the diversity and complexity of the field. The summary below covers some key methodological categories; each encompasses a range of specific methods and techniques, and the terms introduced here are useful search keywords for further reading. Most evaluations employ a combination of methods.

In every evaluation, it is essential that the chosen methods are well suited to answering the specific evaluation questions. Methods must be properly justified and described in sufficient detail that it is clear how the findings and conclusions were reached and what evidence they rest on.

Quantitative methods use measurable and countable data. When appropriate data can be obtained, quantitative approaches are especially effective in providing clear and precise information—such as the degree to which the goals of development assistance programs have been achieved. In some cases, where extensive data is available, it may even be possible to determine whether the development assistance directly caused these results. 
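
As a concrete illustration, here is a minimal sketch in Python of the kind of calculation such an evaluation might perform. All figures, including the indicator and the target, are hypothetical.

    # Hypothetical indicator: share of households reporting improved
    # water access, measured before and after the intervention.
    baseline = 0.42  # before the intervention (hypothetical)
    endline = 0.61   # after the intervention (hypothetical)
    target = 0.65    # the programme's stated goal (hypothetical)

    improvement = endline - baseline
    goal_achievement = improvement / (target - baseline)

    print(f"Observed improvement: {improvement:.1%}")             # 19.0%
    print(f"Degree of goal achievement: {goal_achievement:.1%}")  # 82.6%

Note that a figure like this describes goal achievement only; by itself it says nothing about whether the assistance caused the improvement.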

Qualitative methods focus on information that is more challenging to quantify. These methods are particularly useful for understanding how assistance operates in complex social contexts and for analyzing information that numbers alone cannot capture. Both quantitative and qualitative methods offer unique strengths and limitations, making the choice between them a significant part of planning any evaluation. For example, one approach is to survey hundreds of individuals and analyze their responses statistically (quantitative), while another is to conduct in-depth interviews with a handful of people to gain deeper insights (qualitative). There is no universally best method, as different methods yield different types of understanding—thus, most evaluations integrate both types. 

These methods can provide valuable insights into goal achievement, evaluating whether targeted improvements have occurred in areas receiving development assistance. However, both quantitative and qualitative methods have challenges in establishing causality—that is, determining whether observed improvements are truly the result of the development assistance intervention. 

Impact evaluations represent a distinct category. They seek to provide the most reliable information possible about the effects of interventions using experimental or quasi-experimental methods. 

Impact evaluations compare two groups: one that receives the development assistance intervention, and a “control group” that does not. If these groups are similar and no other external factors influence them, this setup enables a counterfactual analysis: if a change is observed in the intervention group but not in the control group, the change can be attributed to the intervention. 
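
To make this logic concrete, here is a minimal sketch in Python with invented numbers. The change in the control group stands in for what would have happened without the intervention, and the difference between the two changes estimates the effect (often called a difference-in-differences estimate).

    # Hypothetical average outcomes (e.g. household income in USD),
    # measured before and after in both groups.
    intervention_before, intervention_after = 100.0, 130.0
    control_before, control_after = 101.0, 112.0

    # Change observed in each group.
    intervention_change = intervention_after - intervention_before  # 30.0
    control_change = control_after - control_before                 # 11.0

    # If the groups are similar and face the same external conditions,
    # the control group's change approximates the counterfactual.
    estimated_effect = intervention_change - control_change
    print(f"Estimated effect of the intervention: {estimated_effect:.1f}")  # 19.0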

To ensure group similarity, researchers can randomly assign participants to receive the intervention (experimental method, or randomized study), or employ quasi-experimental methods to create a simulated control group. The rigor of these methods directly affects the reliability of the results. 
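
The sketch below, again with hypothetical data, shows the experimental approach: participants are assigned to the two groups at random, and a baseline characteristic is then compared across groups as a simple check that randomization produced similar groups.

    import random
    from statistics import mean

    random.seed(42)  # fixed seed so the example is reproducible

    # Hypothetical participants with a baseline characteristic
    # (e.g. household income before the intervention).
    participants = [{"id": i, "income": random.gauss(100, 15)}
                    for i in range(200)]

    # Experimental method: random assignment to intervention or control.
    random.shuffle(participants)
    intervention, control = participants[:100], participants[100:]

    # Balance check: with random assignment, baseline averages should be
    # close; a large gap would undermine the comparison.
    print(f"Intervention baseline mean: {mean(p['income'] for p in intervention):.1f}")
    print(f"Control baseline mean: {mean(p['income'] for p in control):.1f}")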

An impact evaluation also explores how and why an intervention produces certain effects, utilizing both quantitative and qualitative approaches. 

It is important to note that impact evaluations are resource-intensive—especially if not planned at the outset of a project—and cannot be used for all types of development assistance or evaluation questions. 

For examples and further reading on impact evaluations, visit 3ie or the World Bank’s resource pages on impact evaluations. 

Causality in Development Assistance 

Responsibility for documenting goal achievement for each development assistance measure rests with those managing the intervention. However, demonstrating that an intervention met its objectives does not necessarily mean that the assistance itself caused the success. 

Evaluations often seek to assess causality, helping to determine the actual impact of development assistance. Key questions include: 

  • Has the intended improvement been achieved (goal achievement)? 
  • Is it likely that the development assistance contributed to the improvement (contribution analysis)? 
  • Can we reliably measure how much the development assistance contributed (attribution)? 
  • What would have happened without the intervention (counterfactual analysis)? 

In many cases, it is extremely challenging or even impossible to answer questions about attribution or counterfactuals with certainty, due to lack of sufficient data or suitable methods. Nevertheless, evaluators can always make an informed assessment of the likelihood that an intervention has contributed to achieving its goals.