

Evaluation 2.0

by Linda Engel

In brief

In remote or dangerous regions like South Sudan, digital applications could make project evaluation easier.

In the field of international cooperation, evaluation plays an important role in accountability as well as in institutional learning. Experts therefore want agencies to invest more in evaluation. Digital technologies are providing new options. Some of them are particularly useful in conflict regions.

Accountability to taxpayers is becoming more and more important in the context of official development assistance (ODA). That is the assessment of Jan Tobias Polak of the Austrian Development Agency. Taxpayers cannot personally assess the achievements of ODA the way they can, for instance, health services in their own countries. Therefore the results of evaluations are helpful. Internally, however, evaluations primarily serve organisations as learning tools.

In line with the 2030 Agenda, it would be preferable if partners in developing countries themselves evaluated the work of donors in the future. On the other hand, Jörg Faust of the German Institute for Development Evaluation (DEval) says that ODA is still quite donor-driven. He predicts an uptick in competition for evaluation assignments in the short term. Think tanks, universities and freelance consultants are keen on such work, he argued during a discussion about the “Future of Evaluation”. It was organised by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) in Bonn in May.

Faust also sees a structural problem, given that agencies have so far shown little interest in investing more money in evaluations. Moreover, evaluations must be comparable to be useful. Assessing only individual programmes does not deliver very valuable results. To assign research to universities, however, is normally expensive, so agencies prefer to run evaluations of their own. The results are thus not always objective.

Nowadays experts argue less about the right methods than they did a few years ago. Different approaches are taken in different settings – ranging from rigorous impact analyses based on randomised control trials (RCTs), which rely on randomly chosen control groups, to qualitative methods like group discussions. Expert debate today focuses on issues that relate to the circumstances in which evaluators work. How will digitalisation change evaluations? And can standard approaches be taken in contexts of fragile statehood?

What impacts digitalisation will have is still quite unclear. Perhaps the methods will stay the same, but some elements – for example, interviews – could be done via digital applications like Skype. That would be useful in conflict regions that are too dangerous to visit in person. Another important issue is big data: huge volumes of mostly automatically generated data, which are gathered and stored with the help of digital technologies (also note article by Monika Hellstern in D+C/E+Z e-Paper 2018/07, p. 8 as well as Martin Noltze and Sven Harten in D+C/E+Z e-Paper 2017/07, p. 16). In the future, they could provide completely new insights into changes in people and the environment and one day even replace surveys.

It will also be interesting to see whether evaluations adapt to the changing parameters of ODA. Ever more projects are being carried out in fragile states. In the future, ODA will increasingly be expected to respond fast to crises, for example by addressing the causes of migration. But, says Ricardo Gomez of GIZ, it is the planners who are fundamentally responsible for designing projects in ways that fit the circumstances. Moreover, they must discuss the risks with their paymasters.

An important key word is transparency. For programmes that aim to achieve immediate results, concurrent evaluations could be useful – both in order to keep policymakers informed and to improve project management. In general, however, the most efficient approach will still be to evaluate most projects at the end, says Gomez.

 
