Abstract: The Demand-Driven Evaluations for Decisions (3DE) programme was piloted in Zambia and Uganda in 2012–2015. It aimed to answer evaluative questions raised by policymakers in Ministries of Health, rapidly and with limited resources. Our evaluation assessed whether the 3DE model succeeded in supporting and increasing evidence-based policymaking, building capacity and changing the behaviour of Ministry staff. Using mixed methods, we compared the ex-ante theory of change with what had happened in practice, why and with what results (intended and unintended), including a qualitative assessment of 3DE’s contribution. Data sources included a structured quality assessment of the five impact evaluations produced, 46 key informant interviews at national and international levels, structured extraction from 170 programme documents, a wider literature review of relevant topics, and a political economy analysis conducted in Zambia. We found that 3DE made a very limited contribution to changing evidence-based policymaking, capacity and behaviour in both countries, in part because it pursued a number of aspirations that were not all compatible with one another. Co-developing evaluation questions was more time-consuming than anticipated; Ministry evidence needs did not fit neatly into questions suitable for impact evaluations; and constricted timeframes for undertaking trials did not necessarily produce the most effective results or value for money. The evaluation recommended narrowing the programme’s objectives and taking a more strategic approach to strengthening evaluative demand and capacity. Lessons emerge that are likely to apply in other low- and middle-income settings: the importance of supporting evaluative thinking and capacity within wider institutions, of understanding the political economy of evidence use and uptake, and of allowing some flexibility in programme targets. Fixating on one type of evidence is unhelpful for institutions such as ministries of health, which require a wide range of evidence to plan and deliver programmes. In addition, tying success to indicators such as the number of ‘policy decisions made’ creates potentially perverse incentives and neglects arguably more important outcomes, such as incremental programmatic adjustments and improved implementation.