Evaluation

New approaches

Development policy is supposed to achieve results. For a long time, however, attention was paid exclusively to inputs. Since the commitment to the UN Millennium Development Goals, that practice no longer makes sense, because specific goals are to be achieved by 2015. Results must therefore be measured. For good reason, Germany’s Ministry for Economic Cooperation and Development (BMZ) is planning to set up a new, independent evaluation institute.


By Hildegard Lingnau

Only a few years ahead of the 2015 target date for achieving the Millennium Development Goals (MDGs), the challenge is not only to provide the necessary resources but also to measure the outcomes. The causal links involved, however, are almost countless. In the past, most efforts to assess impacts accurately failed because of the “attribution gap”: in many cases, it could not be said for certain that a particular input led to a particular result. Indeed, it cannot even be taken for granted that a development project, programme or policy has any positive impact at all.

Unfortunately, evaluation in Germany does not have a great reputation. Complaints of “snapshots” and “impressionistic reports of success or failure” tend to be justified, and there is often reason to suspect that evaluations basically serve the PR interests of the commissioning agency.

Greater clarity is needed – and possible. Impact evaluations are based on robust methods for identifying causal links. Ultimately, they seek to answer one question: how would outcomes have been different if a particular intervention had not taken place? To evaluate such matters reliably, treatment groups are compared with control groups – if possible, before an intervention, after it and later once more to see how impacts have unfolded. Other influences are controlled for by randomising group assignments. This methodology allows evaluators to establish causal relations reliably.
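The logic of comparing randomised treatment and control groups before and after an intervention can be sketched in a few lines of code. The following Python snippet is purely illustrative: the village incomes, the intervention and its "true" effect are all invented in order to show how a difference-in-differences comparison recovers an impact estimate.

```python
import random
import statistics

random.seed(42)

# Hypothetical baseline incomes for 200 villages (fictitious data).
villages = [{"baseline": random.gauss(100, 10)} for _ in range(200)]

# Randomise assignment, so treatment and control differ only by chance.
for v in villages:
    v["treated"] = random.random() < 0.5

# Simulate endline incomes: all villages drift upward over time,
# but treated villages gain an extra "true" effect of 15 units.
for v in villages:
    effect = 15 if v["treated"] else 0
    v["endline"] = v["baseline"] + random.gauss(5, 5) + effect

def mean_change(group):
    """Average before/after change within a group."""
    return statistics.mean(v["endline"] - v["baseline"] for v in group)

treated = [v for v in villages if v["treated"]]
control = [v for v in villages if not v["treated"]]

# Difference-in-differences: the change in the treatment group minus
# the change in the control group isolates the intervention's impact
# from the general upward drift affecting everyone.
impact_estimate = mean_change(treated) - mean_change(control)
print(f"Estimated impact: {impact_estimate:.1f}")
```

Because assignment is random, the control group's change serves as the counterfactual – what would have happened to the treated villages without the intervention – so the estimate lands close to the simulated effect of 15 rather than the much larger raw before/after change.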

Impact evaluations depend on standard methods of quantitative and qualitative research. For these methods to be more than instruments for collecting data, however, their application needs to be based on theories and hypotheses. Evaluators must make plausible assumptions on how one thing leads to another in order to be able to test causal chains – or to rule them out. In sum, this means that all potential impacts (whether desired or merely probable) on poverty, for instance, must be considered carefully before any intervention is started. Despite its obvious usefulness, this kind of poverty impact analysis is not currently performed as a matter of course.

On the basis of counterfactual analysis, evaluators are in a position to advise policymakers competently on what kind of action will lead to what kind of results. Impact evaluation thus has two defining characteristics: it is theory-based and policy-relevant.

Sound impact evaluation ultimately depends on being designed at an intervention’s planning stage. It has to fit the specific context. The challenge is quite demanding. Obviously, the evaluators need to understand development theories and evaluation methodology, be experienced in development cooperation and have a profound knowledge of the local setting.

Results come first

The special thing about impact evaluation is that it forces those who design interventions to start considering ends right from the start. They need to define outcomes and then work backwards, gearing projects and programmes to impacts. They must identify control groups, establish baselines and conduct the relevant surveys in order to thoroughly evaluate impacts.

The effort is worthwhile. Impact evaluation captures not only minor effects but also broad-based structural change. It should therefore also be applied to complex interventions such as programme-based approaches and programme-based joint financing. Joint multi-donor evaluations would make much sense too.

Impact evaluation does not only show what has been achieved. It also serves to identify factors that speed up or stand in the way of success. It thus produces the evidence needed to improve development cooperation continuously (see box).

Furthermore, impact evaluation serves to assess quantitative as well as qualitative aspects (such as drinking water quality or the efficiency of healthcare facilities). This should matter to policymakers involved in defining hopefully more meaningful goals, targets and indicators for the post-MDG era.

Impact evaluations have even more advantages:
– They respond to developing countries’ explicit desire for more policy space, and they boost their sense of ownership.
– They strengthen the people whose welfare, education and health are at stake.
– They establish responsibility and thus dispel the prevailing system of “organised non-accountability”.
– They increase effectiveness by introducing a results-based process and shifting attention away from funding.
– They boost efficiency because they reduce transaction and other costs.
– They enable precise lessons to be learned from experiences.
– They can inspire new confidence in development policy through evidence-based knowledge.

A new evaluation culture

Development policy and cooperation need a new culture of evaluation so that decisions are based on reliable research that shows what works and what does not. So far, decisions have often been based on the opinions and preferences of institutions, lobbies and think tanks. It is time to launch a quality campaign that gives impact evaluation a more prominent role.

Various multilateral and bilateral donors (including Germany) are currently reconsidering their approaches to evaluation. The BMZ is preparing to establish a new, independent evaluation institute at an auspicious time. It will be able to build on foundations laid by others. One relevant initiative is the Network of Networks on Impact Evaluation (NONIE), which involves the OECD, the multilateral development banks and the UN. Important work has also been done by the World Bank’s Independent Evaluation Group (IEG) and its Development Impact Evaluation Initiative (DIME). On the academic front, the International Initiative for Impact Evaluation (3ie) has also played a ground-breaking role.

The next steps

Reconsidering evaluation practice in development affairs should result in:
– the establishment of impact evaluation as good or even best practice,
– further improvement and intelligent mixing of quantitative and qualitative tools,
– ex-ante preparations in project/programme planning to facilitate impact evaluation,
– increased application of ex-ante poverty impact analyses,
– more synthesising reviews to collate existing evidence relating to specific sectors,
– results-oriented cooperation with partners in developing countries with the goal of making impact evaluation the standard practice and
– refocussing on the capacities, needs and knowledge demand of developing countries whose interests must be centre stage, since the aid effectiveness principle of national ownership must apply to evaluation matters too.

The approaches outlined above will allow funds to be used better and lead to greater impacts. They will contribute to meeting the Millennium Development Goals and strengthen national and popular ownership of policy in developing countries. They are also likely to reduce the appeal of high-visibility flagship projects in favour of interventions with broader impacts and proven effectiveness.

At the same time, the new approaches are likely to tone down donor institutions’ over-assertive self-esteem. Donors are not omnipotent; they can merely help achieve impacts. The key players are the governments, civil society organisations and private sector companies in the developing countries.
