Everyone agrees that there is a crucial role for research and evidence in development policy-making. But the apparently simple claim that policy decisions should be based on clear and rigorous evidence of value and effectiveness masks the inherently political – and normative – nature of how we define what counts as ‘valid evidence’, and what this means for policy decisions. This is the argument put forward in the excellent recent paper by Andries du Toit. In it, he warns that the dominance of the Evidence Based Policy (EBP) agenda has in some ways narrowed the value of research evidence in policy-making because the “technocratic concern with ‘what works’” ignores the many other ways in which research and evidence can and should inform, illustrate or challenge policy.
As the first paper in this DLP series on ‘the politics of evaluation’ suggested, the same narrowing applies to monitoring and evaluation practice. Despite strong evidence of the value of a ‘mixed methods’ approach to evaluating social change – and for many of the reasons highlighted in du Toit’s critique of the EBP discourse (the desire for simple ‘take-home’ advice, institutional incentives, the emphasis on experimental evidence) – monitoring and evaluation practice remains focused on short-term results and narrow interpretations of value-for-money, and continues to overemphasise experimental evidence.
It seems that we are on the cusp of a narrative shift: from a technical, rational and scientific approach to development towards a recognition that politics matters; that poverty reduction is not a technical problem but requires significant social change; and that this social change is, and must be, both political and locally led. But how do monitoring and evaluation practice and the use of evidence in policy-making reflect and react to these shifting explanatory frameworks? And what can programs that are operating in this newly emerging niche of policy – focused on the politics of social change – do about it?
How can these programs and their evaluators navigate the narrow and tricky path between the pressure to meet existing evaluation and reporting requirements, on the one hand, and the desire to build a strong evidence base to support the assertion that ‘working politically’ can produce stable and positive long-term development outcomes, on the other? How can they remain engaged with the dominant value-for-money and results-oriented paradigms while also pushing for the space needed to test and validate the new and evolving ways of ‘thinking and working politically’? And what can donors and other development organisations do to support this?
This, the second paper in DLP’s series on ‘the politics of evaluation’, draws on the experience of the organisations that participated in the DLP ‘Politics Matters’ workshops (all of which have a number of programs that are ‘working politically’) to offer some answers to these and other questions and to suggest some areas for further exploration.