Development, Security, and Cooperation. Policy and Global Affairs. The National Academies Press, 500 Fifth Street, N.W., Washington, DC 20001.
With 100 overseas missions, each with dozens of projects under way at any given time, it seems reasonable to conclude that millions of dollars are spent each year on M&E, broadly defined. As discussed, impact evaluations of project effects are one component of the broader M&E task, and it would not be simply a matter of transferring funds spent on one part of the M&E function to a different task. But if some of the current approaches to assessing project impact do not, in fact, provide genuine evidence of success or failure, it would seem that there are resources that could be more productively applied, even if no firm dollar amount can be attached to them. More generally, a serious examination of the balance of effort and resources among various types of evaluation, in particular the share devoted to monitoring (outcome evaluation) relative to other forms that can inform strategic decisions and assessments of program impact, could be another part of the evaluation initiative recommended in the next chapter.
IMPROVING ORGANIZATIONAL LEARNING

The results of sound impact evaluations have value for USAID only when they become readily accessible knowledge for USAID officers and that knowledge feeds into learning processes that inform policy and planning. This section looks at what happens to the results of evaluations and other data after they are obtained.
One finding from interviews in Washington and the field is that, more often than not, raw survey data, the basis on which key comparisons within and across countries could be made, are lost. USAID currently has no central repository for the survey data its implementers collect. Given that survey data are, with only the rarest of exceptions, computerized and almost always stored in common formats (typically SPSS, Excel, STATA, or SAS) for which interchangeability programs (e.g., StatTransfer) are readily available, the labor costs and storage space requirements would be trivial. The committee recommends, as an initial step, that the DG office develop a simple system to establish and maintain such an archive. To emphasize how basic the tasks are, the design could be created by a library and information sciences graduate student working as an intern and then maintained by a junior administrative staff person.
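To illustrate how basic such a system could be, the sketch below shows a minimal catalog of the kind the committee describes: each deposited dataset is registered with a few pieces of metadata and can then be located by country or year. All field names, identifiers, and file paths are hypothetical, invented purely for illustration; an actual USAID archive would define its own metadata standard.

```python
# Minimal sketch of a survey-data archive index (all names hypothetical).
from dataclasses import dataclass, asdict
import json


@dataclass
class SurveyRecord:
    dataset_id: str    # unique identifier assigned on deposit
    country: str
    year: int
    mission: str
    file_format: str   # e.g., "SPSS", "Stata", "SAS", "Excel"
    path: str          # location of the deposited data file


class SurveyArchive:
    """In-memory catalog; a real system would back this with a database."""

    def __init__(self):
        self._records = {}

    def register(self, record: SurveyRecord) -> None:
        # Refuse duplicate deposits so each dataset has one canonical entry.
        if record.dataset_id in self._records:
            raise ValueError(f"duplicate dataset id: {record.dataset_id}")
        self._records[record.dataset_id] = record

    def find(self, country=None, year=None):
        # Return all records matching the given filters (None = no filter).
        return [
            r for r in self._records.values()
            if (country is None or r.country == country)
            and (year is None or r.year == year)
        ]

    def export_catalog(self) -> str:
        # Dump the full catalog as JSON for sharing across missions.
        return json.dumps(
            [asdict(r) for r in self._records.values()], indent=2
        )
```

A catalog of this kind would already support the cross-country reuse discussed below, since any analyst could discover which surveys exist for a given country and year.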
Archiving the data, however, is far less of a problem than being sure that all of the data end up in Washington. Other studies of general USAID evaluation practices (Clapp-Wincek and Blue 2001) and the committee’s own DG-focused research found that despite requirements to do so, reports written by consultants and research organizations are not routinely sent to USAID Washington. For many years the Center for Development Information and Evaluation (CDIE) played the role of archivist for USAID. But even when CDIE was functioning, reporting was not systematic. Now that CDIE has been absorbed into the office of the new director of foreign assistance in the State Department, it is not clear how well the “collective memory” of USAID will continue to grow.
Ensuring that survey data are retained would probably require an executive decision at the bureau level or higher to impose an absolute contractual requirement that the data generated would be deposited with USAID Washington. The committee recognizes that the barriers to doing so are real, as many of USAID’s DG programs are carried out by consulting firms whose contractual clauses broadly prohibit the use of their data beyond the confines of the company. Finding ways to address these proprietary issues will be essential to supporting the learning culture this committee believes USAID needs to acquire.
Using Surveys More Systematically to Build a Global Knowledge Base

To develop comparable data that can be regularly updated across the range of countries in which USAID operates, more attention needs to be paid to the systematic use of its survey data. The committee notes at the outset that the field of scientific survey research has been undergoing incremental refinement since its first use in the 1940s. Genuinely representative samples can be designed and survey data obtained at relatively modest cost, and questionnaires can be crafted that provide reliable and valid measurement of citizens' attitudes and behaviors. In practice, most USAID missions commission surveys in an ad hoc fashion. Coupled with the lack of agency-wide coordination of survey research methodology, data collection, and data analysis, this means that USAID is not taking full advantage of opportunities to develop comparability across surveys taken in many parts of the world.
As discussed in Chapter 7, surveys can be used in one form of impact evaluation design when randomization is not possible. Surveys also provide a powerful tool to test democratization hypotheses. Does corruption erode support for democracy? Do certain ethnic groups express more intolerance than others, participate less in civil society, or participate more in protest demonstrations? These are all important questions that can be asked of the Democracy Barometers surveys, and the answers can help target and adjust DG projects.
Surveys can be used to track project success over time. To refer again to civil society participation, if USAID establishes as a project goal increased participation in a given region or among females, then repeated surveys over time can help determine the extent to which those efforts have been successful. Comparisons within a country provide important information about project impact. But to obtain data that would allow for a more general comparative assessment of democratic values and practices, surveys from multiple countries are needed. USAID needs this comparative information to be able to make a determination of how advanced or hindered democratic behaviors and practices are in any given country.
For example, if it finds that corruption victimization affects 10 percent of the adult population in a given country in a single year, it needs to place these data alongside survey data obtained for other countries in order to determine if the 10 percent level is high, medium, or low.
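The comparison the committee describes amounts to placing one country's rate within the distribution of rates observed elsewhere. The sketch below shows that calculation; all of the figures are invented for illustration and do not come from any actual barometer survey.

```python
# Illustrative cross-country comparison of a corruption-victimization rate.
# The reference rates below are hypothetical, not real survey results.

def percentile_rank(value, reference):
    """Share (in percent) of reference observations at or below `value`."""
    return 100.0 * sum(1 for v in reference if v <= value) / len(reference)

# Hypothetical victimization rates (%) from surveys in other countries.
other_country_rates = [4, 6, 8, 9, 12, 15, 18, 22, 27, 33]

# A 10 percent rate is meaningful only relative to this distribution.
rank = percentile_rank(10, other_country_rates)
```

In this invented example a 10 percent rate sits in the lower half of the reference distribution, so it would be read as comparatively low; with a different reference set the same figure could be judged high.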
As already mentioned, consortia of researchers around the world have been developing regional surveys of democratic values and behaviors. The earliest systematic surveys of entire regions emerged in Europe with the development of the Eurobarometer and, since 2001, the emergence of the European Social Survey, which now covers 25 nations in the broadened European community. Other regions of the world also are covered by such surveys, including Eastern Europe, now included in the Eurobarometer; the New Europe Democracies Barometer, which covers much of the former Soviet Union and is currently based at the University of Aberdeen; the Asian Barometer, currently based at the National Taiwan University; and, most recently, the Arab Barometer, currently based at Princeton University and the University of Michigan.4

4 Recent studies by several of these democracy barometers can be found in the July 2007
To the committee’s knowledge, USAID has invested in two regional surveys: (1) the AfroBarometer, organized by Michigan State University and the Institute for Democracy in South Africa; and (2) the Americas Barometer, organized by the Latin American Public Opinion Project of Vanderbilt University and its partner universities and think tanks in Latin America, led by the University of Costa Rica.
The committee believes that greater international coordination among existing surveys should be sought and supported. At present, even among the regional barometer surveys that USAID is partially funding, there is no central coordination across these two regions. Moreover, there are many countries in Africa in which the AfroBarometer does not operate, even though USAID does work there. At this time there is no assurance that the same core items will be asked in each region and country within Africa, nor is there any reason to believe that identical questions will be asked across regions. The committee recommends that USAID facilitate this sort of coordination among those regional surveys it is currently funding and also explore how it might promote such coordination with the Asian and Arab barometers. For example, a small conference could be held in Washington for the senior directors of these regional barometers to see if such coordination would be possible from administrative and financial points of view. It is obvious that within a region or country many items need to be unique to tap into the particularities of that region or country’s structure. Yet there is almost certainly a common core of items that could be asked that would work universally or nearly so.
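The "common core" the committee envisions can be thought of simply as the set of items asked in every regional barometer, alongside the region-specific items each survey retains. The sketch below makes that concrete; the item codes are invented for illustration and are not drawn from the actual barometer questionnaires.

```python
# Illustrative check for a "common core" of items across regional barometers.
# Item codes are hypothetical, not the surveys' real question identifiers.
barometer_items = {
    "AfroBarometer": {
        "trust_courts", "support_democracy", "corruption_victim", "land_rights",
    },
    "AmericasBarometer": {
        "trust_courts", "support_democracy", "corruption_victim", "crime_victim",
    },
    "AsianBarometer": {
        "trust_courts", "support_democracy", "traditional_authority",
    },
}

# Items asked in every survey: candidates for a coordinated common core.
common_core = set.intersection(*barometer_items.values())

# Items unique to one survey: the region-specific particularities that
# the committee notes must remain.
region_specific = {
    name: items - common_core for name, items in barometer_items.items()
}
```

In this invented example only two items overlap across all three surveys, which is exactly the situation coordination among the barometer directors would aim to improve.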
Increasing Active Learning

In addition to acquiring and storing information to shed light on DG program outcomes, another essential part of the committee’s recommendations is for USAID to increase its activities for actively sharing and discussing that information. The internal and external USAID Web sites and those of individual missions provide substantial amounts of information about DG projects and often furnish links to evaluations and efforts to derive “lessons learned.” Unfortunately, as with survey data, although all evaluations are supposed to be provided to the Development Experience Clearinghouse (DEC) and made available on the Web, in practice a substantial fraction never make it out of implementer or mission files.5

5 The DEC Web site is http://dec.usaid.go/ (accessed on August 4, 2007). An assessment of

In the absence of resources to pursue compliance with the requirement—and perhaps enforce some sanction for failure—the competing pressures of other tasks will mean that reporting remains a low priority. The committee believes
that the results of the evaluations undertaken during the evaluation initiative recommended in the next chapter would have to be much more readily available to have the desired effect on future USAID programming.
The committee thus recommends that transmitting reports for DEC should be an important part of each project under the proposed evaluation initiative. More generally, as part of the initiative the resources of DEC should be augmented to help ensure that all project evaluation reports reach DEC so that they can be openly available.
The Internet offers remarkable access and opportunities, but to learn from experience, DG officers and implementers also need opportunities to meet and discuss their experiences on a regular basis. Starting in the mid-1990s, when a reorganization moved technical specialists from the regional bureaus to new “centers,” including a democracy center, annual meetings of DG officers from around the world were held with implementers in the form of “partners conferences,” which provided such opportunities. The meetings frequently included outside experts to supplement and support the learning process. CDIE also organized a series of programs that exposed USAID officers back in the United States on leave to the latest evaluations emerging from the center. Topics generally reflected the annual USAID evaluation agenda.
A number of factors, including tight budgets for operating expenses and criticism of “extraneous” travel, have curtailed these events, and a significant opportunity is being lost. The committee believes that increasing USAID’s capacity to learn what works and what does not should include provisions for regular face-to-face interactions among DG officers, implementers, and outside experts to discuss recent findings, both from the agency’s own evaluations of all kinds and from studies by other donors, think tanks, and academics. Videoconferencing and other advanced technologies can be an important supplement, but personal contact and discussion would be extremely important for sharing experiences of success and failure as the evaluation initiative goes forward.
This includes lessons about the effectiveness of DG projects and about successes and failures in implementing impact evaluations.
This type of meeting is especially important for ensuring that the varied insights derived from impact and process evaluations, academic studies, and examinations of democracy assistance undertaken by independent researchers, NGOs, think tanks, and other donors are absorbed, discussed, and drawn into USAID DG planning and implementation.