Crisis prevention

Evaluation and quality control

Peacebuilding operations in fragile states are a delicate affair. The pressure to succeed is high. But how does one assess results – and what needs to be improved in order to achieve the desired objectives?
Surrendering weapons in Herat, Afghanistan, in November 2012. EPA/picture-alliance/dpa

Many things depend on the success of peacekeeping and state building – not least, human lives. Evaluating development efforts is a way to ensure good results in the future. However, the evaluators of civilian measures designed to prevent crises face huge challenges. It is very difficult, for instance, to collect consistent data in complex conflict situations. Solid evaluation nonetheless helps to check assumptions about what works and what does not in peacebuilding.

A recent study by the Organisation for Economic Co-operation and Development (OECD) identifies the specific challenges:

  • A high risk of violence will limit experts’ scope for asking critical questions, collecting data, recruiting local staff, meeting informants, publishing findings and disclosing sources. Accordingly, their reports are likely to be distorted, censored or incomplete.
  • Evaluators must act in a very flexible manner to tackle processes of unpredictable change that involve many actors and conflicting interests.
  • As there is a lack of generally accepted and proven strategies for civilian crisis prevention, there is no consensus on which indicators should be used to measure success either.
  • The lack of reliable and coherently collected data makes it very difficult to apply statistical methods correctly.
  • Both the politicisation of international relations and sensitivities in national contexts imply that evaluators tend to have little room for manoeuvre.

Evaluators are often suspected of supplying their clients with courtesy certificates (see interview with Jim Rugh in D+C/E+Z 2012/07, p. 300). Accordingly, they should rely on as many different sources of information as possible. Philipp Rotmann of the Global Public Policy Institute says: “Impact analyses must be conducted in a participatory approach taking into account all actors involved.” In his view, both the general public and decision-makers matter.

According to Christine Toetzke of Germany’s Federal Ministry for Economic Cooperation and Development (BMZ), evaluations often show that development agencies underestimated risks and overestimated what they might achieve. All too often, she adds, the conflict context is not properly assessed beforehand. It is indispensable to define the area and scope of measures precisely, Toetzke said at a conference held by the Protestant Academy Loccum near Hannover at the end of last year.

Because government agencies are generally suspected of operating in a wasteful manner, many development agencies struggle to admit problems, says Thania Paffenholz of the Swiss Graduate Institute of International and Development Studies. In her view, evaluations should encourage learning, but that will only happen to a rather limited extent as long as they are primarily designed to boost an agency’s legitimacy. She is in favour of building long-term capacity and expertise through self-evaluation, rather than focussing on a kind of report card for individual development projects.

Paffenholz argues, moreover, that development agencies must consider their actions in the global context, just as donors must stay abreast of what is internationally considered state of the art. Accordingly, she finds international research and evaluation partnerships promising. She points out, however, that it is important not to “run things by the rule book”: standardised hypotheses on causal relations often do not fit conflict scenarios and must be adapted to specific contexts.

Floreana Miesen
