Creating the Conditions for Conducting High-Quality Evaluations of Democracy Assistance Programs and Enhancing Organizational Learning
ISSUES IN OBTAINING HIGH-QUALITY IMPACT EVALUATIONS

Any changes made to the general guidance for monitoring and evaluation (M&E) of DG projects will be carried out in the field in over 80 country missions by hundreds of implementing partners. Even with the centralization of program and budget decision making undertaken in the Foreign Assistance Reforms of 2006 (USAID 2006), USAID remains a highly decentralized agency, and country missions have substantial discretion in how they implement and manage their programs.
The committee also recognizes that the USAID contracting process is already dauntingly complex and time-consuming, demanding much of the time that DG officers have to develop and manage their projects.
The committee thus is cautious about recommending specific solutions for the contracting of evaluations, especially as contract and procurement processes are not an area in which the committee has any special expertise. What follows is instead intended as a set of principles, drawn from research and field studies, that the committee believes will help USAID obtain sound impact evaluations of DG projects. Examples are offered of possible approaches to the problem, but the actual design and implementation of any changes would rest with USAID. Given how difficult it is to change contract management practices within the current realities of USAID programming, the DG evaluation initiative recommended in the next chapter could be an opportunity to try out different approaches.
Incentive Issues

A key problem, not unique to DG or USAID, is the question of providing incentives to DG staff and implementers to undertake and complete sound and credible impact evaluations. The DG officers and implementers the committee and its field teams met shared a strong desire to be successful in promoting democracy. They are drawn to their work because they believe that democracy is a better form of government and that foreign assistance can help bring about democratic development. The problem, however, is how to promote democracy. From the outset, DG officers and implementers alike recognized that “doing democracy” was going to be much more difficult than other areas in development, such as health and agriculture, where causal relationships are better understood and impacts easier to measure. There may be formidable barriers to good policy and implementation in these other areas, but at least there is greater consensus about the basic questions of theory and measurement.
The uncertainty about fundamental aspects of DG reinforces the normal human and bureaucratic incentives to avoid documented failure, a problem that has been cited as affecting evaluations across USAID and not simply DG (Clapp-Wincek and Blue 2001, Savedoff et al. 2006). In the absence of a strong learning culture that encourages open reflection and recognizes the uncertainties surrounding DG programming, carrying out projects that produce no effect (or a negative effect) could understandably be considered a threat to a USAID officer’s career. Similarly, program implementers worry about their organizations’ futures and the results of being associated with a documented failure, knowing that it is generally not the way to win future contracts or grants. In the democracy promotion area, where there is little hard evidence about what works and why, and where many crucial factors that might make for success or failure are beyond the control of DG officers and their implementers, there is a natural tendency to confine measurements of success to those things over which one has some hope of control, such as project outputs and very proximate outcomes.
In addition, a host of time and resource pressures generally leads implementers to forgo gathering extensive baseline data before program rollout and to keep evaluation costs to a minimum in order to conserve precious resources for actual DG program support (or even, as the committee discovered, to use funds from the M&E budget to support programming in the later stages of a project when resources grow tight).
The clear priority for getting programs started as quickly as possible, and doing as much as possible with limited budgets, necessarily leads to a far lower priority for impact evaluation procedures, as these generally require some time and effort spent on collecting baseline data and data from comparison or control groups. Without strong incentives to complete sound impact evaluations on at least some DG programs and some rewards for doing so, these pressures make it highly unlikely that such evaluations will be designed into DG programs. One task of the DG evaluation initiative recommended in the next chapter should be to address these issues and explore how to ease the task of undertaking impact evaluations within USAID’s contracting and program procedures. The initiative should also examine incentives for both DG officers and DG project implementers to carry out sound impact evaluations of selected DG projects.
Coordination Issues Regarding Strategic Assessments

USAID already undertakes a fairly time-consuming process of baseline assessment as part of its development of strategic objectives (see Chapter 2). At present, however, the strategic assessments guide policy planning (including choice of DG projects), which then results in calls for proposals. Evaluations enter later, if at all, in a way quite separate from the initial assessment process.
It would be far more productive for good impact evaluation if the strategic assessments also sought to identify which projects (if any) should be targeted for impact evaluations to determine their effects. Then any baseline information collected as part of the assessments could be designed, and made available, to support the desired impact evaluation.
For example, any national or regional surveys, or interviews with possible or intended participants, could be usefully incorporated into subsequent evaluations. Perhaps even more important, the strategic assessment process must identify critical hypotheses guiding the planned democracy assistance program (e.g., that increasing local mobilization or nongovernmental organizations (NGOs) will reduce corruption), so that they can be clearly specified and designated for impact evaluations in the calls for proposals, if such evaluations are desired.
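For instance, one illustrative way to put such a hypothesis in testable form (the notation and variable names here are hypothetical, not drawn from USAID guidance, and assume that communities receiving NGO-mobilization support can be compared with otherwise similar communities that do not) is:

\[
\text{Corruption}_i = \alpha + \beta\,\text{NGOSupport}_i + \varepsilon_i, \qquad H_0\colon \beta = 0 \quad \text{vs.} \quad H_1\colon \beta < 0,
\]

where Corruption_i is a measured corruption outcome in community i and NGOSupport_i indicates whether that community received assistance. Stating the hypothesis this precisely at the assessment stage makes clear what outcome data, and from which comparison communities, a subsequent impact evaluation would need.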
Contracting Issues

The committee’s research and field visits also found that the current process of awarding contracts and grants actually works against conducting impact evaluations in a number of specific ways:
• DG officers are chosen for expertise in democracy assistance and aid delivery, not for expertise in evaluation designs. Thus DG officers often felt that their mission staff lacked the expertise to prescribe or judge what would be an effective, high-quality impact evaluation design.
• Implementers, who often believed they had the expertise to undertake a richer variety of M&E activities, including impact evaluations, thought that USAID gave priority to doing the proposed work rather than to M&E and that, especially when budgets were tight, ambitious M&E plans would work against them in bidding for projects.
• Systematic communication among DG officers and between DG officers and implementers is limited, so there is little opportunity to share experiences and compare, and perhaps correct, perceptions of each other’s expectations.
• Given the multiple steps in the contracting/grant-making process, there are many points at which decisions can be made that restrict or eliminate the opportunity to design impact evaluations into projects from the outset or that keep them from being carried out fully once a project has begun.
• On the positive side, the basic system for program monitoring and use of indicators in place through the Automated Directives System is a good foundation, even if current practice could be improved (USAID ADS 2007). Thus the data collection required for impact evaluations seems practical if the incentives and contract procedures motivate implementers to schedule baseline, outcome, and comparison group measurements as part of the contracted DG activity.
The key difference between commissioning an evaluation in response to a specific need (as a separate contract issued after a project has begun or been completed, or when a shift in strategy is contemplated) and commissioning an impact evaluation is that an impact evaluation needs to be treated as an integral part of a project’s implementation design. Unless baseline measurements are part of the contract schedule and data collection on an appropriate comparison or control group is provided for at project inception, it is difficult—often impossible—to go back and obtain such information once a project has begun or been completed. This means that if a mission wants to obtain sound evidence of the impact of a particular project, staff will need to think about planning an impact evaluation before they have even drawn up the call for proposals for that project and make a suitable design for impact evaluation part of the original action and budget plan for that project.
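As a minimal sketch of why timing matters (the notation here is illustrative, assuming a simple design with baseline and endline measurements of an outcome Y for both the project group and a comparison group), the project’s impact can be estimated by a difference-in-differences:

\[
\widehat{\text{Impact}} = \bigl(Y^{\text{project}}_{\text{post}} - Y^{\text{project}}_{\text{pre}}\bigr) - \bigl(Y^{\text{comparison}}_{\text{post}} - Y^{\text{comparison}}_{\text{pre}}\bigr).
\]

Two of the four quantities in this estimate, the “pre” measurements for both groups, can only be collected before project activities begin; if they are not written into the original contract schedule, the estimate cannot be reconstructed later.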
Call for Proposals

When a USAID mission undertakes a new project or the next phase of a continuing one, in most cases there is a formal request for bids, called a Request for Proposals (RFP) for a contract and a Request for Applications (RFA) for a grant or cooperative agreement.1 One required component for those responding to an RFP or RFA is a description of how the project would be monitored and evaluated. Given the strict federal rules governing competitive procurement policies, the RFP/RFA is the primary source of information available to a would-be implementer about the mission’s goals for the project and requirements for a successful bidder, including M&E.
In current practice there is seldom any indication that an evaluation process is expected beyond the required Performance Monitoring Plans, which generally focus on tracking the project’s activities and immediate outputs. In addition, as the committee learned, DG officers differ in how much detailed guidance they want to provide in an RFP or RFA, sometimes preferring to give the implementers, who have substantive expertise and experience, flexibility to provide most of the details of how they think the project and M&E should be carried out.