Development, Security, and Cooperation; Policy and Global Affairs. The National Academies Press, 500 Fifth Street, N.W., Washington, DC 20001.
Bollen et al found a serious lack of consistent and systematic information about inputs, activities, and outputs/outcomes in the sample of evaluations they studied. Baseline conditions were rarely fully recorded before program implementation began, and no reference or comparison groups were used to help establish whether trends or external conditions other than USAID’s DG programs were responsible for the observed outcomes. They therefore concluded that these documents could not serve as the basis for a retrospective evaluation of the effectiveness of DG programming. A subsequent SSRC project recommended a multimethod approach that would both provide for retrospective analysis to learn from past efforts and lay the foundation for a greater capacity to evaluate future DG work (Bollen et al 2003, SSRC 2004).
The first major piece of research sponsored by SORA in response to the SSRC recommendations was the large-scale, cross-national, quantitative analysis examining the effects of USAID democracy assistance on democracy building described above. The first phase of this research, including both the analysis and the dataset, was released in 2006 and the results are beginning to appear in academic journals (Finkel et al 2007, Azpuru et al 2008). The second phase of the quantitative analysis, which added several years of data and examined key issues related to the first set of findings, was released in early 2008 (Finkel et al 2008). The NRC project described in this report is the newest SORA-sponsored activity.
SORA’s Charge to the NRC Committee

Noting the problems that the Bollen et al (2005) review found in past USAID project evaluations, USAID asked the NRC for help in developing improved methods for learning about the effectiveness and impact of its work, both retrospectively and in the future. Specifically, the project is to provide:
1. A refined and clear overall research and analytic design that integrates the various research projects under SORA into a coherent whole in order to produce valid and useful findings and recommendations for democracy program improvements.
2. An operational definition of democracy and governance that disaggregates the concept into clearly defined and measurable components.
3. Recommended methodologies to carry out retrospective analysis.
The recommendations will include a plan for cross-national case study research to determine program effectiveness and inform strategic planning. USAID will be able to use this plan as the basis of a scope of work to carry out comparative retrospective analysis, allowing USAID to learn from its 25 years of investment in DG programs.
4. Recommended methodologies to carry out program evaluations in the future. The recommendations for future analysis will focus on more rigorous approaches to evaluation than currently used to assess the impact of democracy assistance programming. They should be applicable across the range of DG programs and allow for comparative analysis.
5. An assessment of the feasibility of the final recommended methodologies within the current structure of USAID operations, together with a definition of the policy, organizational, and operational changes in those operations that might improve the chances of successful implementation.
To respond to USAID’s request, the NRC created the ad hoc Committee on the Evaluation of USAID Democracy Assistance Programs, whose members bring expertise in the major areas of USAID activities, direct experience with USAID projects, and expertise in the contributions that social science methodology for comparative political analysis could make to improving USAID’s evaluations of its work. Appendix A provides biographies of the committee members, and Appendix B gives information about the meetings that were the core of the committee’s deliberations.
Responding to the Charge

USAID thus approached the NRC with two broad questions: (1) How can we learn to more effectively support democracy in particular countries and contexts around the world? (2) How can we learn where and whether our specific DG assistance programs have been effective?
While it is tempting for a committee such as this one to draw guidance from current democracy research to advise USAID on how best to pursue democracy assistance in varied circumstances, it is the committee’s firm view, based on its review of the evidence, that any such advice would be premature. As already discussed, the current state of the academic literature on democratization is highly contested, and the topic of democracy assistance has only very recently become a focus of academic research.
Thus the committee believes that it cannot simply draw on current academic research to answer these questions.
For example, while the committee knows that many researchers would have views on such questions as whether democracy assistance should be “sequenced” in a certain way across sectors, whether targeting corruption or promoting decentralization is an effective way to advance democracy, and whether democracy assistance is futile in countries under authoritarian rule, it is fairly certain that it would not find widely accepted consensus answers to these questions. Even one of the oldest and most central debates about democracy assistance—whether it is more fruitful to help poor countries develop democracy first, since that will aid their subsequent economic growth, or to promote economic growth first, since that will lay a foundation for a subsequent transition to a more lasting democracy—remains far from settled (Przeworski et al 2000, Halperin et al 2004, Gerring et al 2005, Carothers 2007). Moreover, practical problems have raised new issues in democracy assistance, such as the degree to which it can, or should, be pursued in conjunction with security provision by military forces. Past conventional wisdom was that military forces and civilian aid programs should be kept strictly separated, yet conditions in many countries have forced DG programs to work in close partnership with military forces or even for military forces themselves to become agents of government reconstruction and DG assistance (Goldstone 2006, U.S.
Department of State 2007). Many questions have arisen about how best to provide democracy assistance in these new circumstances.
Given this uncertainty on broad matters of strategy, the committee has focused on the second question, for there the committee believes it can suggest procedures by which USAID can draw on the work of the academic and policy communities, as well as its own experience in democracy assistance, to make substantial advances in learning which of its DG assistance programs are most effective. Moreover, the committee would go so far as to argue that in the current state of scientific research, answering the second question is likely the best way to also answer the first. That is, the committee believes that the fastest way for USAID to improve the effectiveness of its democracy support programs around the world is to determine which of its programs really work, and how well, in regard to advancing such concrete goals as improving the skills of legislators and the autonomy of judges, reducing corruption, enhancing popular participation, and ensuring free and fair elections. By building its stock of knowledge on which of its DG projects best accomplish these goals, and to what extent and at what cost in specific circumstances, USAID will improve its ability to assist those seeking to advance democracy on the ground in complex and demanding conditions.
At the same time, the committee recognizes that aside from learning about its programs’ effectiveness, DG officers require a constant stream of information on program management, as well as special evaluations of what happened when things turn out unexpectedly. In addition, since academic research on democracy and democratization is racing ahead, USAID needs to keep abreast of useful findings and be aware of shifts in views and the emergence of consensus when they occur. Thus
the committee also sees it as its responsibility to suggest procedures and organizational reforms that will assist USAID in a broad span of learning activities. These include efforts at improving measures of democracy, learning from comparative and historical case studies of democratization, and developing a diversity of designs for project evaluation. They also include outlining incentives and procedures to increase active learning and the application of new knowledge and ideas to the planning and implementation of DG activities.
Major Findings and Recommendations

The committee considered both retrospective and prospective approaches to studying USAID activities and how to make the best use of methods ranging from case studies to randomized evaluations to the structured sharing of USAID DG officers’ experiences through debriefings and conferences. Based on this work, the committee’s most important conclusions and recommendations are:
• Most evaluations of DG programs have been designed to meet a variety of monitoring and management needs. While yielding valuable insights, they have not provided compelling evidence of program effects. Collecting the information needed to most clearly determine the impact of DG projects—including before-and-after measurements of key outcome variables, documentation of changes in policy-relevant outcomes rather than activities completed, and measurements of both the groups receiving assistance and control or comparison groups that did not—is not currently part of most monitoring and evaluation plans for DG programs.
• USAID needs to gain experience with impact evaluations, including those using randomized designs, to learn whether they could improve its ability to more accurately ascertain the effects of its DG programs. If their feasibility is demonstrated for a wide range of DG projects, impact evaluations could provide critical information on what works best, and under what conditions, in democracy assistance.
• Such impact evaluations could take a variety of forms, depending on the character and conditions of specific DG programs. Large N randomized evaluations provide the most accurate and credible determination of the impact of aid programs and should be used where possible.
Field studies suggest that many current DG programs (e.g., decentralization programs) could be studied using randomized designs. For those DG programs where randomization is not suitable, other impact evaluation
designs are available, ranging from studies with matched or national baseline comparison groups to single-case studies that use time-series data to examine how outcomes change over time in response to USAID DG assistance.
• There is considerable skepticism, among both scholars and policymakers, regarding the feasibility and appropriateness of applying rigorous impact evaluations to DG activities. One member of this committee, Larry Garber, emphatically shares these concerns. Most of the committee members, on the other hand, while acknowledging and respecting the skepticism among many policymakers, believe that rigorous impact evaluations of DG projects are feasible and that they will provide the most accurate and credible way for U.S. taxpayers, as well as the citizens of the countries in which USAID funds democracy programs, to gain assurance as to which DG programs work and which do not.