Development, Security, and Cooperation
Policy and Global Affairs
THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001
• Although creating better measures at the sectoral level to track democratic change is a long-term process, there is no need to wait for such measures to determine the impact of USAID's DG projects. USAID has already compiled an extensive collection of policy-relevant indicators to track specific changes in government institutions or citizen behavior, such as levels of corruption, levels of participation in local and national decision making, quality of elections, professionalism of judges or legislators, or the accountability of the chief executive. Since these are, in fact, the policy-relevant outcomes that are most plausibly affected by DG projects, the committee recommends that measurement of these factors, rather than sectoral-level changes, be used to determine whether the projects are having a significant impact on the various elements that compose democratic governance.
valuable in mapping out varied trajectories of political development and identifying the role that democracy assistance could play in such trajectories in relation to various actors and events.
Nonetheless, committee members were unable to agree on a firm recommendation that USAID should invest its own funds in such case studies, since substantial case study research on democratization is already being undertaken by academics and NGOs. To learn more about the role of its DG assistance projects in varied conditions and in varied trajectories of democratization, USAID could instead draw on this ongoing academic research. Since much potentially relevant academic research is not written for a policy audience, however, USAID would need to structure its interactions with researchers to ensure that it gains useful and relevant information.
Strategies for Implementation
• If USAID decides to invest in supporting case study research, the committee recommends using a competitive proposal solicitation process to elicit the best designs. USAID should not specify a precise case study design but instead should specify key criteria that proposals must meet. These should include: (1) the criteria for choosing cases should be explicit and theoretically driven; (2) the cases should include a variety of initial conditions or contexts in which USAID DG projects operate; (3) the cases should include at least one, if not several, countries in which USAID and other donors have made little or no investment in DG projects; and (4) the cases should include countries with varied outcomes regarding democratic progress or stabilization.
• In addition to case studies, a variety of other research methods, both formal and informal (including debriefings of USAID field officers, statistical analyses of international data, and surveys) can shed light on patterns of democratization as well as how DG projects actually operate in the field and how they are received. USAID should include these varied sources of information as part of the regular organizational learning activities the committee recommends next.
Recommendation 4: Rebuilding USAID's Institutional Mechanisms for Learning
Regardless of whether USAID conducts more or fewer impact evaluations, and whether it contracts for case studies or works with case studies funded by think tanks or other organizations, little of what is learned will effectively guide and improve DG programming without some mechanism within USAID for learning from its own and others' research on democracy and democratization. For USAID to benefit from this committee's proposed pilot study of impact evaluations, it will need regular means of disseminating the results of those and other evaluations throughout the agency and discussing the lessons learned from them. For USAID to benefit from ongoing academic research and the studies of DG assistance being undertaken by think tanks and NGOs, the agency will need to organize regular structured interactions between such researchers and its DG staff.
While it will take some time for USAID to learn from undertaking the pilot impact evaluations, it will gain immediately from augmenting its overall learning activities and increasing opportunities for DG staff to actively engage with current research and studies on democratization.
Though some committee members believe that the impact evaluations will be more novel and instructive than most current case study and policy reports on democratization, several committee members wish to emphasize the considerable value to policymakers and DG officers of the many books, articles, and reports that have been prepared in recent years by academics, think tanks, and practitioners. Whatever the methodological flaws of these case studies and process evaluations from a rigorous social sciences perspective, the committee notes that this expanding literature has provided important lessons and insights for crafting effective DG programs. Thus the committee is unanimous in finding that a renewed emphasis on engaging USAID DG personnel in discussion and analysis of current research on democratization and democracy assistance—including both varied types of evaluations and a broad range of scholarship—would be worthwhile and should begin even before the pilot evaluations have been completed.
Unfortunately, in recent years USAID has substantially reduced its institutional mechanisms for creating, disseminating, and absorbing knowledge. The Center for Development Information and Evaluation (CDIE), which served as the hub of systematic evaluation for USAID aid projects, has been dissolved. Moreover, USAID's support of conferences and learning activities for mission directors and DG staff to share experiences and discuss the latest research has declined. And although central collection of evaluations is already a requirement, in practice much useful information, including evaluations and other project documents, survey data and reports, and mission director and DG staff reports, remains dispersed and difficult to access.
The committee recommends that the steps below be undertaken by a special DG evaluation initiative led by a senior policymaker or official within USAID who will have the authority to recommend agency-wide changes, since many of the obstacles to improved learning about DG programs stem from agency-wide procedures and organizational characteristics. While in some ways this will replace the capabilities lost with CDIE, the committee hopes the new initiative will go beyond them.
The committee’s charge is limited to recommendations for improving USAID’s ability to evaluate its DG projects, but the committee notes that there could be advantages to making this an agency-wide initiative.
USAID implements social programs in many parts of the agency, so the changes the committee recommends could yield much wider benefits.
A DG EVALUATION INITIATIVE
In support of Recommendations 1 and 4, the committee recommends that USAID develop a five-year DG evaluation initiative, led by a senior
USAID official and with special funding, for the following:
1. Undertaking Pilot Impact Evaluations
The committee strongly recommends that, to accelerate the building of a solid core of knowledge regarding project effectiveness, the DG evaluation initiative immediately develop and undertake a number of well-designed impact evaluations that test the efficacy of key project models or core development hypotheses guiding USAID DG assistance. A portion of these evaluations should use randomized designs, as these are the most accurate and credible means of ascertaining program impact. Because randomized designs have also been the most controversial, especially in the DG area, it would be particularly valuable for the evaluation initiative to help USAID gain experience with these designs and determine their value for learning the impacts of DG projects.
By key models the committee refers to programs that (1) are implemented in a similar form across multiple countries and (2) receive substantial funding (e.g., local government support, civil society, judicial training). By core hypotheses the committee refers to assumptions guiding USAID program design that, whether drawn from experience or prevailing ideas about how democracy is developed and sustained, have not been tested as empirical propositions.
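The logic of the randomized designs recommended above can be illustrated with a minimal sketch. All names and numbers here are hypothetical: units, outcome scores, and the simulated effect size are invented for illustration, not drawn from any USAID evaluation.

```python
import random
import statistics

def randomized_impact_estimate(treated_outcomes, control_outcomes):
    """Estimate project impact as the difference in mean outcomes between
    randomly assigned treatment and control groups, with a rough
    two-sample standard error for that difference."""
    diff = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
    se = (statistics.variance(treated_outcomes) / len(treated_outcomes)
          + statistics.variance(control_outcomes) / len(control_outcomes)) ** 0.5
    return diff, se

# Hypothetical illustration: 40 municipalities, half randomly assigned to
# receive a local-government support project. The outcome is a simulated
# governance indicator (e.g., a citizen-participation index).
random.seed(0)
municipalities = list(range(40))
random.shuffle(municipalities)
treated, control = municipalities[:20], municipalities[20:]

TRUE_EFFECT = 4.0  # assumed effect size, used only to simulate data

treated_outcomes = [random.gauss(50 + TRUE_EFFECT, 5) for _ in treated]
control_outcomes = [random.gauss(50, 5) for _ in control]

impact, se = randomized_impact_estimate(treated_outcomes, control_outcomes)
# Because assignment was random, `impact` is an unbiased estimate of the
# project's effect, and `se` conveys its statistical uncertainty.
```

The key point of the design, not the arithmetic, is the random assignment: it ensures treated and control groups differ only by chance, so the difference in means can credibly be attributed to the project.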
2. Increasing USAID's Capabilities in Project Evaluation
Supporting the DG evaluation initiative with special, dedicated resources outside the usual project structure would be another signal of a strong commitment to change. It is also important that these resources and the accompanying expertise in evaluation design be made available to missions implementing DG programs, so that more rigorous evaluations become an opportunity for missions to gain support rather than an additional unfunded burden. Any changes to M&E of DG programs will be carried out in the field by over 80 missions and hundreds of implementing partners. Even with the centralization of program and budget decision making undertaken in the foreign assistance reforms of 2006, USAID remains a highly decentralized agency, and mission staff have substantial discretion in how they implement and manage their programs.
The initiative should thus make its resources and expertise available to mission directors who want support in conducting impact evaluations or otherwise changing their mix of M&E activities, making the initiative an asset to DG directors in the field rather than an added obligation.
3. Providing Technical Expertise
In recent years, as USAID has reduced the number of evaluations it conducts, the agency has also failed to hire experts in the latest evaluation practices to guide and oversee its contracting and research. The committee recommends that, as a key element of the initiative, USAID acquire sufficient internal expertise in this area both to guide an initiative at USAID headquarters and to provide advice and support to field missions.
4. Improving the Ease of Undertaking Impact Evaluations of DG Projects
While many evaluations are currently sought only well after a project has begun, or even only after its completion, impact evaluations generally require before-and-after measures and data from comparison or control groups; these must be designed into the program from its inception and often cannot be obtained at all once a program is well under way. Pressures to get projects under way, as well as many current contracting practices, thus work against implementing and sustaining impact evaluation designs. One task of the DG evaluation initiative should be to address these issues and explore how to ease the undertaking of impact evaluations within USAID's contracting and program procedures. The initiative should also examine incentives for both DG officers and project implementers to carry out sound impact evaluations of selected DG projects.
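The before-and-after, treatment-and-comparison structure described above is the basis of the standard difference-in-differences estimator. The following sketch shows why baseline data collected at inception matter; the districts and scores are invented for illustration only.

```python
from statistics import mean

def difference_in_differences(pre_treated, post_treated, pre_control, post_control):
    """Impact estimate = (change in treated group) - (change in control group).
    The control group's change stands in for what would have happened to the
    treated group without the project, netting out background trends."""
    change_treated = mean(post_treated) - mean(pre_treated)
    change_control = mean(post_control) - mean(pre_control)
    return change_treated - change_control

# Hypothetical baseline and follow-up scores on a governance indicator for
# three districts with a DG project and three comparison districts.
pre_treated, post_treated = [50, 52, 48], [57, 59, 55]   # mean change: +7
pre_control, post_control = [49, 51, 50], [53, 55, 54]   # mean change: +4

impact = difference_in_differences(pre_treated, post_treated,
                                   pre_control, post_control)
# impact == 3.0: the estimated project effect net of the background trend
```

Without the `pre_*` baseline measurements, only the post-project difference (57 vs. 54) would be observable, and any pre-existing gap between the groups would be indistinguishable from project impact. This is why such designs usually cannot be reconstructed after a program is well under way.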
for USAID. This group could play a useful role in advising on the design of the DG evaluation initiative, helping to work through issues that arise during implementation, and developing a peer review process for assessing the evaluations undertaken during the initiative.
6. Rebuilding Institutional Learning Capacity
This initiative should be guided by a policy statement outlining the strategic role of developing USAID as a learning organization in the democracy sector. The committee believes that increasing USAID's capacity to learn what works and what does not should include provisions for regular face-to-face interactions among DG officers, implementers, and outside experts to discuss recent findings, both from the agency's own evaluations of all kinds and from studies by other donors, think tanks, and academics. Videoconferencing and other advanced technologies can be an important supplement, but personal contact and discussion will be essential to sharing experiences of success and failure as the evaluation initiative goes forward. This includes lessons about both the effectiveness of DG projects and the successes and failures encountered in implementing impact evaluations.