Development, Security, and Cooperation; Policy and Global Affairs
THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001
Given these differences in opinion and the need to acquire capacity and experience with using impact evaluations to learn the effects of DG programs, the committee unanimously recommends that USAID begin a pilot initiative designed to demonstrate whether such evaluations can help USAID determine the effects of its DG projects on policy-relevant outcomes. This initiative should include randomized studies and focus on DG projects that are in wide use or represent major investments for USAID; it should also offer expertise and support to missions and DG officers who wish to conduct varied forms of impact evaluations suited to learning about the impact of their programs.
• To better track democratic changes in countries for strategic assessment and policy planning, USAID and other national and international organizations providing democracy assistance should explore making a substantial investment in the systematic collection of democracy indicators at a disaggregated, sectoral level—focused on the components of democracy rather than (or in addition to) the overall concept. Rather than attempting to arrive at a single score capturing all elements of the quality of democracy in a country, this effort should focus on how to best map out a country’s politics along a number of discrete dimensions (e.g., civil liberty, transparency, judicial independence, checks on the executive). Such a disaggregated index would allow policymakers to clarify how, specifically, one country’s democratic features differ from others in the region or across regions and better identify how changes are occurring over time.
These measures should aim to be more transparent, objective, sensitive, and widely accepted than currently available measures of democracy, which have substantial flaws.
• To learn more about the role of its DG assistance projects in a broader range of settings and in varied trajectories of democratization, USAID should either sponsor or seek to gain from ongoing academic research a more diverse and more theoretically structured set of case studies of democracy assistance than it has developed in the past. The committee suggests that these case studies should examine countries in which USAID has invested substantially and in which it has invested little, countries in which democratization unfolded successfully and where it failed or was reversed, and countries that included a range of varied initial conditions in which DG assistance was offered (e.g., authoritarian or semiauthoritarian regimes, emerging or transitional democracies, and countries emerging from violent internal conflicts).
• To better translate learning into policy planning and effective management, USAID should rebuild its institutional mechanisms for absorbing and disseminating the results of its work and evaluations, as well as its own research and the research of others, on processes of democratization and democracy assistance. These mechanisms should include conferences, panels, and other creative and active learning opportunities, featuring discussion of USAID’s own program evaluations and other research; debate on the work of academics, think tanks, and other donor organizations; and sharing of experiences among DG officers, implementers, and other DG assistance providers.
The remainder of this report presents evidence that supports these conclusions and recommendations and offers additional specific recommendations for USAID actions to achieve them.
A Note on Evaluations

Because the main task given to the committee by SORA was to provide guidance to USAID on how to determine the effects of its DG programs, this report spends considerable time discussing issues of evaluation design. For the specific task of determining a project’s true effects, there is no substitute for a well-designed impact evaluation.
Some of this discussion (especially in Chapters 5 and 6) is quite technical because the issues of evaluation design are complicated, especially when dealing with many of the conditions in which USAID must actually work, where USAID does not control the assignment of assistance, conditions are rapidly changing, and pressures from many diverse sources affect programming.
The committee stresses that its discussion of evaluation practices breaks no new methodological ground. If the purpose of an evaluation is to provide evidence that a project has had its intended impact, there is a consensus in the social science and program evaluation research communities about the methods that provide the most confidence in making such judgments (Cook and Campbell 1979; Shadish et al. 2001; Wholey et al. 2004). Moreover, the committee’s recommendations regarding evaluations, and its emphasis on the potential value of undertaking more impact evaluations of aid programs as the best way to improve aid effectiveness, are not unique. They align with a growing number of recommendations from private foundations, think tanks, and donor governments that have urged greater efforts to explore the use of impact evaluations to improve DG and other types of foreign assistance.11

It is also recognized, however, that some of the evaluation procedures that the committee (as well as other groups and reports) recommends have not been widely employed in some sectors of the development community, especially in the area of democracy and governance. In fact, as noted above, the committee is aware of significant skepticism among policy professionals and academics regarding the feasibility and appropriateness of applying so-called scientific or randomized evaluation procedures to democracy assistance programming.12 Perhaps the most important source of skepticism is the belief that applying rigorous impact evaluation procedures to DG programs is impractical given the actual conditions of designing and implementing DG assistance, a point committee member Larry Garber emphasized. Alternatively, the restrictions on who receives DG programming that are sometimes necessary in order to conduct a rigorous impact evaluation may be considered an unethical failure to respond to an urgent need.
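To make the logic of a randomized impact evaluation concrete, the sketch below simulates one with entirely hypothetical data; the community scores, effect size, and sample size are invented for illustration and do not represent any actual USAID program or evaluation. Because treatment is randomly assigned, a simple difference in mean outcomes between treated and control groups estimates the program’s average effect.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical data: outcome scores (e.g., a 0-100 civic-participation
# index) for 2,000 communities. For illustration only, we assume the
# program raises a community's score by about 5 points.
n = 2000
baseline = [random.gauss(50.0, 10.0) for _ in range(n)]

# Randomized assignment: each community has an equal chance of receiving
# the program, so treated and control groups are comparable in expectation.
treated_flags = [random.random() < 0.5 for _ in range(n)]
outcomes = [b + (5.0 if t else 0.0) + random.gauss(0.0, 3.0)
            for b, t in zip(baseline, treated_flags)]

treated = [y for y, t in zip(outcomes, treated_flags) if t]
control = [y for y, t in zip(outcomes, treated_flags) if not t]

# With random assignment, the difference in mean outcomes is an unbiased
# estimate of the average treatment effect (ATE).
ate = statistics.mean(treated) - statistics.mean(control)

# A simple standard error for the difference of two independent means.
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"estimated effect: {ate:.2f} points (standard error {se:.2f})")
```

The estimate lands near the true 5-point effect because randomization balances the groups on average; without random assignment, the same difference in means would confound the program’s effect with preexisting differences between the groups, which is precisely the problem the evaluation designs discussed in Chapters 5 through 7 are meant to address.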
The committee took these objections seriously. What it believes is unique about this project is that it does not draw only on academic practices or an idealized view of how project evaluation should proceed.
The committee commissioned fieldwork in three countries—Albania, Peru, and Uganda—to explore the feasibility and desirability of changing evaluation procedures to produce stronger evidence of whether projects were having their intended impact. Independent consultants—chosen for their academic expertise, expertise in the countries or regions visited, and experience in either doing DG-relevant research in the field using the proposed methods or in working with USAID on other aspects of project evaluation—were hired to work with mission DG staff in discussing the potential for revised evaluation procedures.13

11 These are described in a more detailed discussion of evaluation practices in Chapter 2.
12 See, for example, the commentary in Banerjee (2007); see also Cook (2006), Davidson (2006), and Scriven (2006). White (2006, 2007) has argued that the portion of development aid that can be subject to randomized impact evaluation is severely limited.
13 The consultants’ biographies can be found in Appendix E, along with the major findings from the visits. The teams were accompanied by a USAID DG staff officer from the SORA project and NRC professional staff members. Following the three field visits, the committee convened a public session at its July 2007 meeting to discuss the findings of the field visits with USAID and a number of DG implementers.
This significantly expanded the range of views and experience available to the committee and, it is hoped, added greater realism to the eventual findings and recommendations. This report uses that field experience to address the most frequently voiced objections regarding the application of more rigorous evaluation procedures to DG programs (see Chapter 6 in particular). In addition, because it is recognized that “best-case” scenarios for employing impact evaluations often cannot be realized, Chapter 7 discusses a large number of “next-best” procedures and practical modifications of DG evaluation practices.
Finally, because only field experience with using these methods on actual DG projects can truly address the skepticism and concerns about more rigorous evaluations, and because current USAID and implementer capabilities to undertake these methods are limited and would need to be developed, the committee’s recommendations are modest and cautious. The committee proposes that a number of impact evaluations, particularly randomized designs, be tested initially through a special initiative aimed at a limited number of thoughtfully chosen DG projects to demonstrate the feasibility and value of such impact evaluations for guiding DG programming.
While this report places great stress on opportunities to build knowledge through exploring the use of impact evaluations, the committee realizes that building knowledge requires more than just efforts to acquire information. The committee therefore recommends that efforts to improve DG project evaluations be part of a broader initiative to restore and augment USAID’s capacity as a learning organization. This initiative should create ongoing programs to involve DG officers throughout the agency in discussion and analysis of research on DG assistance generated inside and outside the agency, including case studies, academic research, and the work of NGOs and other donors. The key to this effort will be the degree to which USAID staff and key implementers are involved in ongoing efforts to share and disseminate their experience, and draw on a variety of sources, to inform program planning and execution.
Plan of the Report

The chapters in this report provide supporting analysis that underpins the committee’s major recommendations. Chapter 2 reviews and assesses current approaches to monitoring and evaluation used by USAID in the context of current evaluation practice in the development assistance community. It distinguishes among various kinds of evaluations for various purposes and discusses how properly designed impact evaluations could make an important addition to USAID’s current mix of monitoring and evaluation practices.
Chapter 3 reviews and analyzes current approaches to measuring democracy and their limitations for USAID’s strategic assessment and tracking needs. The analysis draws in part on a workshop held at Boston University in January 2007 to explore the current “state of the art” in the indicators used to track and assess the status of democracy and governance in countries over time (see Appendix C for further information).
A somewhat technical chapter, it offers a plan for improving such measures by focusing on measurements at the level of sectoral components of democracy and argues for the need for USAID—either alone or in conjunction with other U.S. government or international agencies—to lead a research project to develop more credible, transparent, objective, and widely accepted measures to track democratic change than current indicators provide. Many of the terms used in this chapter and in Chapters 5 through 7 are defined in the Glossary at the end of this report.
Chapter 4 examines the lessons that can be derived from historical case studies of democratization and democracy assistance and offers suggestions about how USAID can gain more extensive and theoretically structured case studies that would examine the role of democracy assistance in diverse trajectories of democratic development. It draws on a second workshop in March 2007 cosponsored with the Center for Democracy, Development, and the Rule of Law at Stanford University (see Appendix D for further information), which focused on insights that current academic research could provide about democratic transitions and consolidations as a foundation for understanding the potential contributions of democracy assistance.