At the start of 2023, the KfW Development Impact Lab was managing 18 rigorous impact evaluations (RIEs). These cover the full range of Financial Cooperation (FC) topics: the largest number of evaluations (five) address “sustainable economic development, training and employment”, three are in the field of “climate and energy”, and a further two concern “peace and social cohesion”. We plan to expand this portfolio in the coming years and to finance, or attract financing for, new RIEs.
These evaluations are carried out with various partners. Some are supported by multilateral organisations, such as the World Bank or the World Food Programme (WFP), and by our own impact evaluation unit. Seven evaluations are collaborations between KfW and universities, in particular on design, data analysis and evaluation. Three evaluations are carried out by members of the Development Impact Lab themselves.
The KfW Development Impact Lab also supports RIEs that have been selected via the funding portfolio of the German Institute for Development Evaluation (DEval) and are therefore not directly financed by KfW Development Bank.
RIEs use experimental and quasi-experimental methods to measure the causal effects of a project. In other words, they identify the effects that can be attributed exclusively to the project, isolating them from concurrent developments and from other connections between projects and target indicators.
In addition to overall impacts on a project’s target group, RIEs also analyse impacts on subgroups and the mechanisms underlying those impacts. For example, a health care project may have significantly greater effects for women than for men, or a new connection to the electricity grid may only lead to productive uses of electricity in areas with access to markets.
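The logic of such a subgroup analysis can be sketched in a few lines of Python. All figures below are invented purely for illustration; they are not taken from any actual evaluation:

```python
# Hypothetical subgroup analysis: a health project's estimated effect,
# broken down by gender. All numbers are invented for illustration.

# Average outcome (e.g. preventive clinic visits per year) by group
outcomes = {
    ("women", "intervention"): 4.2, ("women", "control"): 2.1,
    ("men",   "intervention"): 2.8, ("men",   "control"): 2.4,
}

for subgroup in ("women", "men"):
    # Effect per subgroup = intervention mean minus control mean
    effect = outcomes[(subgroup, "intervention")] - outcomes[(subgroup, "control")]
    print(f"Estimated effect for {subgroup}: {effect:+.1f}")

# A single pooled average would mask that the effect for women (+2.1)
# is far larger here than the effect for men (+0.4).
```

Estimating effects separately by subgroup in this way is what reveals heterogeneity that an overall average would hide.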
The most rigorous designs are fully experimental, such as randomised controlled trials (RCTs), which represent the “gold standard” in the field. In an RCT, a group of individuals, schools, communities or other units is randomly assigned to benefit from the project, or from part of it (the “intervention group”). A second group receives access to the project or intervention at a later time or, as with placebos in medicine, not at all (the “control group”). As in medical research, the principle of (controlled) random assignment ensures that the two groups are comparable: depending on the intervention, they will on average be equally old, equally healthy, and similarly ambitious, vulnerable or wealthy. Any post-intervention differences between the groups can therefore be attributed to the project itself. A well-known example is conditional cash transfers, which are disbursed to households in the target group if their children attend school.
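The mechanics of an RCT estimate can be illustrated with a small simulation. The sample size, the assumed 10-percentage-point effect on school attendance and the outcome model below are all invented for illustration:

```python
import random

random.seed(42)

# Hypothetical RCT: 1,000 households; half are randomly assigned to
# receive a cash transfer (intervention group), half serve as controls.
n = 1000
households = list(range(n))
random.shuffle(households)
treated = set(households[: n // 2])

def school_attendance(is_treated):
    # Simulated outcome: baseline attendance rate around 70%, with noise;
    # the (assumed) transfer raises it by 10 percentage points on average.
    base = 0.7 + random.gauss(0, 0.05)
    effect = 0.10 if is_treated else 0.0
    return min(1.0, max(0.0, base + effect))

outcomes = {hh: school_attendance(hh in treated) for hh in households}

mean_t = sum(outcomes[hh] for hh in treated) / len(treated)
mean_c = sum(outcomes[hh] for hh in households if hh not in treated) / (n - len(treated))

# Because assignment was random, the two groups are comparable on average,
# and the simple difference in means estimates the project's effect.
print(f"Estimated effect: {mean_t - mean_c:.3f}")
```

With random assignment, no further modelling is needed: the estimated effect recovers the simulated 10-percentage-point impact up to sampling noise.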
If purely experimental (random) assignment is not justifiable or feasible, quasi-experimental methods can prove worthwhile. For example, comparison groups can be defined around the threshold values of certain selection criteria (regression discontinuity design, RDD). Alternatively, two similar groups can be compared using the difference-in-differences method. As in an RCT, only one of the groups benefits from the project; the project’s impact is then estimated by comparing how the outcomes of the two groups change over time.
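The difference-in-differences calculation itself is simple arithmetic, shown here with invented numbers: the comparison group’s change over time stands in for the general trend, and subtracting it isolates the project’s effect, under the assumption that both groups would otherwise have followed parallel trends:

```python
# Hypothetical difference-in-differences sketch: two similar regions,
# only one receives the project; we observe an outcome (e.g. average
# household income) before and after in both. Numbers are invented.

# region -> mean outcome
before_project = {"project_region": 100.0, "comparison_region": 95.0}
after_project = {"project_region": 130.0, "comparison_region": 110.0}

# Change over time in each region
change_project = after_project["project_region"] - before_project["project_region"]
change_comparison = after_project["comparison_region"] - before_project["comparison_region"]

# The comparison region's change (+15) captures the general trend;
# subtracting it from the project region's change (+30) isolates
# the project's estimated effect, assuming parallel trends.
did_estimate = change_project - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:.1f}")  # 15.0
```

The credibility of this estimate rests entirely on the parallel-trends assumption, which is why the comparison group must be chosen to be as similar as possible to the project group.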