Development, Security, and Cooperation. Policy and Global Affairs. The National Academies Press, 500 Fifth Street, N.W., Washington, DC 20001.
In DG alone, thousands of indicators are used every year to track project performance. Most of them are related to the outputs of specific activities or very proximate project outcomes. This process, supplemented by occasional evaluations, constitutes the largest portion of what USAID refers to as “monitoring and evaluation.” The result is that USAID DG missions spend a large amount of time and money acquiring and transmitting the most basic accounting-type information on their projects (what is described above as “output” evaluations); far less time and money are spent in determining which projects really work and how efficient they are at producing desired results.
In January 2006, Secretary of State Condoleezza Rice initiated a series of reforms, centered on the budget and program planning process, intended to bring greater coherence to U.S. foreign assistance programs (USAID 2006). As part of these reforms the USAID administrator was designated the director of foreign assistance (DFA) and provided with a staff in the State Department to supplement the staff of USAID in implementing the reforms. Instead of a largely bottom-up process that collected, coordinated, and eventually reconciled budget and program requests from individual offices and missions, the new F Process exercised an unprecedented degree of centralized control, setting common objectives for State and USAID and bringing most budget and programming decisions to Washington.8 Eventually, the first joint State-USAID budget was submitted to Congress for FY2008, with significant changes in aid allocations for a number of countries (Kessler 2007).
Creation of the DFA structure in the State Department led to the dissolution of the separate policy planning apparatus in USAID. As part of this change, the Center for Development Information and Evaluation (CDIE), which served as a clearinghouse for all evaluations in USAID and had also commissioned the series of independent evaluations of USAID DG programs discussed above, was dissolved and its personnel were transferred into the new DFA Office of Strategic Information in the State Department.
The F Process also resulted in the creation of a set of common indicators collected for all programs in all missions. Most of these are output measures, which for the first time provided a comprehensive look at USAID activities worldwide (U.S. Department of State 2006). Their use in DG is examined in greater detail below. While these output indicators are designed to reflect the overall level of USAID DG activity in a country, they are not intended to provide a strategic assessment of levels of democracy in a country or evidence of the impact of specific DG projects.

8 A number of projects, however, including the MCC and the President’s Emergency Fund
Any recommendations for changing the approach to evaluation of DG programs will have to operate within this broader context in USAID and the wider donor community. Within USAID the GPRA-required structure of SOs for programs and performance monitoring for projects is a legal mandate that USAID can adapt but not eliminate. How much of the F Process will endure is unclear at present, but it does illustrate how much can happen—and how quickly—with high-level leadership and support.
THREE KEY PROBLEMS WITH CURRENT USAID MONITORING AND EVALUATION PRACTICES

Focusing on Appropriate Measures Regarding DG Activities

As noted above, USAID has developed many good indicators to track the results of its DG projects. USAID is clearly aware of the important differences between various levels of indicators—those dealing with attaining targeted outputs, those dealing with the institutional or behavioral changes sought by the program, those dealing with broad sectoral changes at the country level, and those dealing with national levels of democracy. The Handbook of Democracy and Governance Program Indicators, developed by the Center for Democracy and Governance (USAID 1998) as part of the implementation of GPRA, is the most comprehensive collection of indicators in this area of which the committee is aware. It sets forth detailed suggestions on how to measure outputs and outcomes in the four areas of concern to the DG office: rule of law, elections and political processes, civil society, and governance. It provides a valuable resource to missions and subcontractors as they develop appropriate indicators to assess the impact of specific programs in these sectors.
The development of output measures, especially in some program areas, has continued. The following is taken from the draft of a handbook on support for decentralization programming, currently being prepared
for use by USAID:
A distinction should be drawn at the outset between two different kinds of M&E [monitoring and evaluation] activities. One kind of M&E seeks to assess progress on program implementation, that is, the process of implementing decentralization reforms. To this end, one might gather and analyze data on what are sometimes called output indicators: the number of meetings and workshops held, officials trained, and so on.
These kinds of indicators can help to document whether necessary steps are being taken towards effective support of decentralization programs, and they may be especially useful as management tools for program implementation.
Another kind of M&E, however, seeks to assess the impact of decentralization programming on the broader goals described in this handbook:
enhancing stability, promoting democracy, and fostering economic development. The key questions are whether and how we can attribute outcomes along these dimensions, or aspects of these dimensions, to the effect of USAID initiatives in support of decentralization programming.
This kind of M&E is crucial, for it is the only way to assess what works and what does not in decentralization programming. (USAID 2007)

A few of the democracy indicators recommended by this handbook include:
• Ease with which political parties can register to participate in elections;
• Ability of independent candidates to run for office;
• Number of human rights violations, as tracked by civil society organizations (CSOs) or ombudsman’s office;
• Proportion of citizens who positively evaluate government responsiveness to their demands;
• Existence of competitive local elections;
• Percentage of total subnational budget under the control of participatory bodies.
USAID has also funded various agencies to collect valuable data on outcome indicators. For example, a recent national survey in Afghanistan conducted by the Asia Foundation (2007) and underwritten by USAID
collected data on the following indicators and many others:
• Do you agree or disagree with the statement that some people make: “I don’t think the government cares much about what people like me think.”
• How would you rate the security situation in your area: Excellent, good, fair, or poor?
• Compared to a year ago, do you think the amount of corruption overall in your neighborhood has increased, stayed the same, or decreased? In your province? In Afghanistan as a whole?
• Would you participate in the following activities with no fear, some fear, or a lot of fear: voting, participating in a peaceful demonstration, running for public office?
Such survey questions make excellent baseline indicators of outcome measures for many DG assistance projects. USAID could then survey assisted and nonassisted groups on the same questions a year later to help determine the impact of DG assistance. This is an example of how USAID can make use of extant surveys that already provide baseline data on a variety of relevant outcome measures.
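The comparison just described, resurveying assisted and nonassisted groups on the same baseline questions a year later, amounts to a simple difference-in-differences calculation. A minimal sketch follows; all response shares are invented for illustration and are not Asia Foundation survey results:

```python
# Minimal difference-in-differences sketch. All numbers are invented
# for illustration; they are not actual survey results.

def did_estimate(treated_before: float, treated_after: float,
                 control_before: float, control_after: float) -> float:
    """Change in the assisted group minus change in the comparison group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical shares of respondents agreeing that government is responsive
effect = did_estimate(
    treated_before=0.40, treated_after=0.55,   # assisted communities
    control_before=0.42, control_after=0.47,   # nonassisted communities
)
print(f"Estimated project effect: {effect:+.2f}")
```

With real data the same arithmetic would be applied to matched survey waves, and standard errors would be needed before attributing the difference to the assistance program.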
A more centralized set of indicators was developed as part of the F Process. As mentioned above, the Foreign Assistance Performance Indicators are intended to measure “both what is being accomplished with U.S.
foreign assistance funds and the collective impact of foreign and host-government efforts to advance country development” (U.S. Department of State 2006). Indicators are divided into three levels: (1) the Objective level, usually country-level outcomes, as collected by other agencies such as the World Bank, United Nations Development Program, and Freedom House; (2) the Area level, measuring performance of subsectors such as “governing justly and democratically,” which captures most of the objectives pursued by the DG office; and (3) the Element level, which seeks to measure outcomes that are directly attributable to USAID programs, projects, and activities, using data collected primarily by USAID partners in the field (U.S. Department of State 2006).
Clearly, USAID has taken the task of performance-based policymaking seriously. The central DG office, the various missions throughout the world, and the implementers who support USAID’s work in the field are all acutely aware of the importance of measurement and the various obstacles encountered. The concerns the committee heard were often not that USAID lacks the right measures to track the outcomes of its programs. Although this can be a major problem for some areas of DG, the committee also saw evidence that USAID field missions and implementers have, and seek to use, appropriate measures for program outcomes.
Rather, the problem is that the demands to supply detailed data on basic output measures or to show progress on more general national-level measures overwhelm or sidetrack efforts that might go into collecting data on the substantive outcomes of projects.
Matching Tasks with Appropriate Measurement Tools

Broadly speaking, USAID is concerned with three measurement-related tasks: (1) project monitoring, (2) project evaluation, and (3) country assessment. The first concerns routine oversight (e.g., whether funds are being properly allocated and implementers are adhering to the terms of a contract). The second concerns whether the program is having its intended effect on society. The third concerns whether a given country is progressing or regressing in a particular policy area with regard to democratization (USAID 2000).
Corresponding to these different tasks are three basic types of indicators: outputs, outcomes, and meso- and macro-level indicators. Output measures track the specific activities of a project, such as the number of individuals trained or the organizations receiving assistance. Outcome measures track policy-relevant factors that are expected to flow from a particular project (e.g., a reduction in corruption in a specific agency, an increase in the autonomy and effectiveness of specific courts, an improvement in the fairness and accuracy of election vote counts). Meso- and macro-level measures are constructed to assess country-level features of specific policy areas and are often at levels of abstraction that are particularly difficult to determine with any exactness. Examples include “judicial autonomy,” “quality of elections,” “strength of civil society,” and “degree of political liberties.” For purposes of clarification, these concepts are included, along with an illustrative example, in Table 2-1.
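The pairing of tasks with indicator types can be made concrete with a small data-structure sketch. The indicator names and mappings below are hypothetical examples paraphrasing the text, not USAID's actual indicator schema:

```python
from dataclasses import dataclass

# Illustrative sketch only: names and categories are hypothetical
# examples drawn from the discussion, not USAID's actual schema.

@dataclass
class Indicator:
    name: str
    level: str  # "output", "outcome", or "meso/macro"

# One example per level, mirroring the text
indicators = [
    Indicator("individuals trained", "output"),
    Indicator("reduction in corruption in a specific agency", "outcome"),
    Indicator("judicial autonomy", "meso/macro"),
]

def suited_for(indicator: Indicator) -> str:
    """Map an indicator's level to the measurement task it best serves."""
    return {
        "output": "project monitoring",
        "outcome": "project evaluation",
        "meso/macro": "country assessment",
    }[indicator.level]

for ind in indicators:
    print(f"{ind.name} -> {suited_for(ind)}")
```

The mismatch the committee describes arises when an indicator from one row is used for a task in another, for example judging a single project by a country-level measure such as judicial autonomy.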
As noted, USAID has made extensive efforts to identify indicators at all levels and across a wide range of sectors of democratic institutions.
Nonetheless, in practice a mismatch often arises between the chosen measurement tools and the tasks these tools are expected to perform.
Two problems, in particular, stand out. First, based on the committee’s discussions with USAID staff and implementers and further discussions and reviews of project documents during the three field visits described
in Chapter 7, there is continuing concern that the effectiveness of specific USAID DG projects should not be judged on the basis of meso- or macro-level indicators, such as the overall quality of elections or even changes in national-level indicators of democracy. Second is whether current practices lead to overinvestment in generating and collecting basic output measures, as opposed to policy-relevant indicators of project results.