Development, Security, and Cooperation. Policy and Global Affairs. THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001.
Contents (excerpt)

General Observations; Current Policy and Legal Framework for USAID DG Assessments and Evaluations; Three Key Problems with Current USAID Monitoring and Evaluation Practices; Conclusions; References

3 Measuring Democracy: Introduction

7 Additional Impact Evaluation Designs and Essential Tools for Better Project Evaluations: Introduction; How Often Are Randomized Evaluations Feasible?; Designing Impact Evaluations When Randomization Is Not Possible; What to Do When There Is Only One Unit of Analysis; Conclusions; References

8 Creating the Conditions for Conducting High-Quality Evaluations of Democracy Assistance Programs and Enhancing Organizational Learning: Introduction; Issues in Obtaining High-Quality Impact Evaluations; Improving Organizational Learning; Conclusions; References

9 An Evaluation Initiative to Support Learning the Impact of USAID’s DG Programs: Introduction; Providing Leadership and Strategic Vision; Implementing the Vision: The Evaluation Initiative; Agenda for USAID and SORA; Role of Congress and the Executive Branch; Conclusions; References

Glossary

Appendixes
BACKGROUND

Over the past 25 years, the United States has made support for the spread of democracy to other nations an increasingly important element of its national security policy. Many other multilateral agencies, countries, and nongovernmental organizations (NGOs) also are involved in providing democracy assistance. These efforts have created a growing demand to find the most effective means to assist in building and strengthening democratic governance under varied conditions.
Within the U.S. government, the U.S. Agency for International Development (USAID) has principal responsibility for providing democracy assistance. Since 1990, USAID has supported democracy and governance (DG) programs in approximately 120 countries and territories, spending an estimated total of $8.47 billion (in constant 2000 U.S. dollars) between 1990 and 2005. The request for DG programs for fiscal year 2008 was $1.45 billion, which includes some small programs in the U.S. Department of State.
Despite these substantial expenditures, our understanding of the actual impacts of USAID DG assistance on progress toward democracy remains limited—and is the subject of much current debate in the policy and scholarly communities. Admittedly, the realities of democracy programming are complicated, given the emphasis on timely responses in politically sensitive environments and flexibility in implementation to account for fluid political circumstances. These realities pose particular challenges for the evaluation of democracy assistance programs. Nonetheless, USAID seeks to find ways to determine which programs, in which countries, are having the greatest impact in supporting democratic institutions and behaviors and how those effects unfold. To do otherwise would risk making poor use of scarce funds and remaining uncertain about the effectiveness of an important national policy.
Yet USAID’s current evaluation practices do not provide compelling evidence of the impacts of DG programs. While gathering valuable information for project tracking and management, these evaluations usually do not collect data that are critical to making the most accurate and credible determination of project impacts—such as obtaining baseline measures of targeted outcomes before a project is begun or tracking changes in appropriately selected (or assigned) comparison groups to serve as a control or reference group.
USAID has been seeking better evidence for the effects of its DG projects. In 2000 the Office of Democracy and Governance created the Strategic and Operational Research Agenda (SORA). Under SORA, USAID has commissioned studies of its DG evaluations and underwritten a recent cross-national study of the effects of its democracy assistance programs since 1990. A very encouraging finding from that study is that democracy assistance does matter for democratic progress. The study (Finkel et al. 2007; see also the second-phase study, Finkel et al. 2008) found that, when controlling for a wide variety of other factors, higher levels of democracy assistance are, on average, associated with movement to higher levels of democracy. These results provide the clearest evidence to date that democracy assistance contributes toward achieving its desired goals.
Unfortunately, it is also true that in a number of highly important cases—such as Egypt and post-Soviet Russia—large volumes of democracy assistance have yielded disappointing results. In addition to knowledge about general effects, USAID needs to know the positive or negative effects of specific projects and why DG assistance has been more successful in some contexts than in others. SORA turned to the National Research Council (NRC) for assistance in how to gain greater insight into which democracy assistance projects are having the greatest impacts. This report is intended to provide a road map to enable USAID and its partners to build, absorb, and act on improved knowledge about assisting the development of democracy in a variety of contexts.
1. A refined and clear overall research and analytic design that integrates the various research projects under SORA into a coherent whole in order to produce valid and useful findings and recommendations for democracy program improvements.
2. An operational definition of democracy and governance that disaggregates the concept into clearly defined and measurable components.
3. Recommended methodologies to carry out retrospective analysis.
The recommendations will include a plan for cross-national case study research to determine program effectiveness and inform strategic planning. USAID will be able to use this plan as the basis of a scope of work to carry out comparative retrospective analysis, allowing the agency to learn from its 25 years of investment in DG programs.
4. Recommended methodologies to carry out program evaluations in the future. The recommendations for future analysis will focus on more rigorous approaches to evaluation than currently used to assess the impact of democracy assistance programming. They should be applicable across the range of DG programs and allow for comparative analysis.
5. An assessment of the feasibility of the final recommended methodologies within the current structure of USAID operations, and a definition of the policy, organizational, and operational changes in those operations that might improve the chances for successful implementation.
OVERALL RESEARCH AND ANALYTIC DESIGN

In response to the first charge, the committee unanimously recommends a four-part strategy for gaining increased knowledge to support USAID’s DG policy planning and programming:
Recommendation 1: Undertaking a pilot program of impact evaluations designed to demonstrate whether such evaluations can help USAID determine the effects of its DG projects on targeted policy-relevant outcomes. A portion of these impact evaluations should use randomized designs since, where applicable and feasible, they are the designs most likely to lead to reliable and valid results in determining project effects and because their use in DG projects has been limited. USAID should begin the pilot program by focusing on a few widely used DG program categories. The pilot evaluations should not supplant current evaluations and assessments, but impact evaluations could gradually become a more important part of USAID’s portfolio of monitoring and evaluation (M&E) activities as the agency gains experience with such evaluations and determines their value. (See Chapters 5 through 7 for a discussion of impact evaluations and how they might be applied to DG projects and Chapter 9 for the committee’s recommendations.)
Recommendation 2: Developing more transparent, objective, and widely accepted indicators of changes in democratic behavior and institutions at the sectoral level—that is, at the level of such sectors as the rule of law, civil society, government accountability, effective local government, and quality of elections. Current aggregate national indicators of democracy, such as Freedom House or Polity scores, are neither at the right level for identifying the impacts of particular USAID DG projects nor accurate and consistent enough to track modest or short-term movements of countries toward or away from greater levels of democracy. (See Chapter 3.)

Recommendation 3: Using more diverse and theoretically structured clusters of case studies of democratization and democracy assistance to develop hypotheses to guide democracy assistance planning in a diverse range of settings. Whether USAID chooses to support such studies or gather them from ongoing academic research, it is important to look at how democracy assistance functions in a range of different initial conditions and trajectories of political change. Such case studies should seek to map out long-term trajectories of political change and to place democracy assistance in the context of national and international factors affecting those trajectories, rather than focus mainly on specific democracy assistance programs. (See Chapter 4.)

Recommendation 4: Rebuilding USAID’s institutional mechanisms for absorbing and disseminating the results of its work and evaluations, as well as its own research and the research of others, on processes of democratization and democracy assistance. In recent years, USAID has lost much of its capacity to assess the impact and effectiveness of its programs.
Without an active program of organizational learning so that senior personnel and DG officers have structured opportunities to discuss the results of pilot evaluations, compare their experiences with DG programs, and discuss the research carried out by USAID and especially other scholars, implementers, and donors, the fruits of the committee’s first three recommendations will not be usefully integrated with the experience of DG officers in a way that will improve DG program planning, design, and outcomes. (See Chapters 8 and 9.)
Recommendation 1: Undertaking a Pilot Program of Impact Evaluations

Charges 4 and 5 asked the committee to recommend methodologies for future program evaluations and to evaluate their feasibility. These issues are addressed first, however, because the committee believes that, among the charges it was given, improving USAID’s ability to more precisely ascertain the effects of future DG programs has more potential to build knowledge of what works best in DG programming than either retrospective analyses (given the limits found in the collection of data on past DG projects) or improving the definition of democracy. The committee thus investigated USAID’s current evaluation methods and explored a range of designs for improved evaluations that could be applied to DG projects. The committee also commissioned teams of consultants to visit three diverse missions—in Albania, Peru, and Uganda—to assess the feasibility of applying those designs, in particular impact evaluations, to actual ongoing or planned DG projects. Of course, these evaluations, like all of USAID’s evaluations and research, must be part of a broader learning strategy if the agency is to benefit; these organizational aspects are discussed separately below.
What Are Impact Evaluations?
Most current evaluations of USAID DG projects, while informative and serving varied purposes for project managers, lack the designs or data needed to provide compelling evidence of whether those projects had their intended effects. An impact evaluation aims to separate the effects of a specific DG project from the vast range of other factors affecting the progress of democracy in a given country and thus to make the most precise and credible determination of how much DG projects contribute to desired outcomes.
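The report does not prescribe a particular estimator here, but one common way to combine the two ingredients it identifies (baseline measures and a comparison group) is a difference-in-differences calculation. The sketch below uses entirely hypothetical numbers, for example scores on a civic-knowledge index, purely to illustrate the logic of netting out country-wide trends from a project's apparent effect:

```python
# Illustrative sketch with hypothetical data: a difference-in-differences
# estimate of project impact. Both the outcome measure and the numbers
# below are invented for illustration, not drawn from any USAID evaluation.

def diff_in_diff(treat_base, treat_end, comp_base, comp_end):
    """Return a difference-in-differences impact estimate.

    Each argument is a list of outcome scores for one group
    (project participants or the comparison group) at one point
    in time (baseline, before the project, or endline, after it).
    """
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treat_end) - mean(treat_base)
    comparison_change = mean(comp_end) - mean(comp_base)
    # Subtracting the comparison group's change nets out trends that
    # would likely have occurred even without the project.
    return treated_change - comparison_change

# Hypothetical scores: both groups improve over time, but the
# treated group improves more; the estimate isolates that surplus.
estimate = diff_in_diff(
    treat_base=[40, 42, 45], treat_end=[55, 58, 57],
    comp_base=[41, 43, 44],  comp_end=[48, 47, 49],
)
print(round(estimate, 2))  # prints 9.0
```

The point of the sketch is negative as much as positive: without the baseline lists, `treated_change` cannot be computed at all, and without the comparison lists, any secular improvement in the country would be misattributed to the project.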