Development, Security, and Cooperation. Policy and Global Affairs. The National Academies Press, 500 Fifth Street, N.W., Washington, DC 20001.
• Some donors and international agencies are beginning to implement more impact evaluations. Nonetheless, considerable concerns and skepticism remain regarding the feasibility and appropriateness of applying impact evaluations to DG projects. These need to be taken seriously and addressed in any effort to introduce them to USAID.
• Current practices regarding measurement and data collection show a tendency to emphasize collection of output measures rather than policy-relevant outcome measures as the core of M&E activities. There is also a tendency, in part because of the lack of good meso-level indicators, to judge the success of DG programs by changes in macro-level measures of a country’s overall level of democracy, rather than by achieving outcomes more relevant to a project’s plausible impacts.
• Much useful information aside from evaluations, such as survey data and reports, detailed spending breakdowns, and mission director and DG staff reports, remains dispersed and difficult to access.
• USAID has made extensive investments in developing outcome measures across all its program areas; these provide a sound basis for improving measurements of the policy-relevant effects of DG projects.
• Once completed, there are few organizational mechanisms for broad discussion of USAID evaluations among DG officers or for integration of evaluation findings with the large range of research on democracy and democracy assistance being carried on outside the agency.
• Many of the mechanisms and opportunities for providing organizational learning were carried out under the aegis of the CDIE. The dissolution of this unit, combined with the longer-term decline in regular evaluation of projects, means that USAID’s capacity for drawing and sharing lessons has disappeared. The DG office’s own efforts to provide opportunities for DG officers and implementers to meet and learn from one another and from outside experts have also been eliminated.
• Evaluation is a complex process; improving the mix of evaluations and their use, and in particular increasing the role of impact evaluations in that mix, will require a combination of changes in USAID practices. Gaining new knowledge from impact evaluations will depend on developing good evaluation designs (a task that requires special skills and expertise), acquiring good baseline data, choosing appropriate measures, and collecting data on valid comparison groups. Determining how to feasibly add these activities to the current mix of M&E activities will require attention to the procedures governing contract bidding, selection, and implementation. The committee’s recommendations for how USAID should address these issues are presented in Chapter 9.
Moreover, better evaluations are but one component of an overall design for learning, as making the best use of evaluations requires placing the results of all evaluations in their varied contexts and historical perspectives. This requires regular activities within USAID to absorb and disseminate lessons from case studies, field experience, and research from outside USAID on the broader topics of democracy and social change.
The committee’s recommendations on these issues are presented in Chapter 8.
These recommendations are intended to improve the value of USAID’s overall mix of evaluations, to enrich its strategic assessments, and to enhance its capacity to share and learn from a variety of sources—both internal and from the broader community—about what works and what does not in efforts to support democratic progress.
Asia Foundation. 2007. Afghanistan in 2007: A Survey of the Afghan People. Available at:
http://www.asiafoundation.org/pdf/AG-survey0.pdf. Accessed on February 23, 2008.
Banerjee, A.V. 2007. Making Aid Work. Cambridge, MA: MIT Press.
Bollen, K., Paxton, P., and Morishima, R. 2005. Assessing International Evaluations: An Example from USAID’s Democracy and Governance Programs. American Journal of Evaluation 26:189-203.
CIDA (Canadian International Development Agency). 2007. Results-Based Management in CIDA: An Introductory Guide to the Concepts and Principles. Available at: http://www.
acdi-cida.gc.ca/CIDAWEB/acdicida.nsf/En/EMA--PPK#. Accessed on September 12, 2007.
Clapp-Wincek, C., and Blue, R. 2001. Evaluation of Recent USAID Evaluation Experience. Washington, DC: USAID, Center for Development Information and Evaluation.
Danish Ministry of Foreign Affairs. 2005. Peer Assessment of Evaluation in Multilateral Organizations: United Nations Development Programme, by M. Cole et al. Copenhagen: Ministry of Foreign Affairs of Denmark.
DfID (Department for International Development). 2004. Public Information Note: Drivers of Change. Available at: http://www.gsdrc.org/docs/open/DOC.pdf. Accessed on September 16, 2007.
Green, A.T., and Kohl, R.D. 2007. Challenges of Evaluating Democracy Assistance: Perspectives from the Donor Side. Democratization 14(1):151-165.
House of Commons (Canada). 2007. Advancing Canada’s Role in International Support for Democratic Development. Ottawa: Standing Committee on Foreign Affairs and International Development.
Jacquet, P. 2006. Evaluations and Aid Effectiveness. In Rescuing the World Bank: A CGD Working Group Report and Collected Essays, N. Birdsall, ed. Washington, DC: Center for Global Development.
Kessler, G. 2007. Where U.S. Aid Goes Is Clearer, But System Might Not Be Better. Washington Post, p. A1.
McFaul, M. 2006. The 2004 Presidential Elections in Ukraine and the Orange Revolution: The Role of U.S. Assistance. Washington, DC: USAID, Office for Democracy and Governance.
McMurtry, V.A. 2005. Performance Management and Budgeting in the Federal Government: Brief History and Recent Developments. Washington, DC: Congressional Research Service.
Management Systems International. 2000. Third Annual Performance Measurement Survey: Data Analysis Report. USAID/Mali Democratic Governance Strategic Objective.
Millennium Challenge Corporation. 2007. Fiscal Year 2007 Guidance for Compact Eligible Countries, Chapter 29, Guidelines for Monitoring and Evaluation Plans, p. 19. Available at: http://www.mcc.gov/countrytools/compact/fy0guidance/english/-guidelinesformande.
pdf. Accessed on September 12, 2007.
OECD (Organization for Economic Cooperation and Development). 2005. Lessons Learned on the Use of Power and Drivers of Change Analyses in Development Operations.
Review commissioned by the OECD DAC Network on Governance, Executive Summary. Available at: http://www.gsdrc.org/docs/open/DOC.pdf. Accessed on September 12, 2007.
Sarles, M. 2007. Evaluating the Impact and Effectiveness of USAID’s Democracy and Governance Programmes, in Evaluating Democracy Support: Methods and Experiences, P.
Burnell, ed. Stockholm: International Institute for Democracy and Electoral Assistance and Swedish International Development Cooperation Agency.
Savedoff, W.D., Levine, R., and Birdsall, N. 2006. When Will We Ever Learn? Improving Lives Through Impact Evaluation. Washington, DC: Center for Global Development.
Schmid, A. 2007. Measuring Development. Available at: http://www.gtz.de/de/dokumente/ELRen-0-.pdf. Accessed on September 12, 2007.
Shadish, W.R., Cook, T.D., and Campbell, D.T. 2001. Experimental and Quasi-Experimental Designs for Generalized Causal Inference, 2nd ed. Boston: Houghton Mifflin.
USAID ADS. 2007. Available at: http://www.usaid.gov/policy/ads/00/. Accessed on August 2, 2007.
USAID (U.S. Agency for International Development). 1997. The Role of Evaluation in USAID. TIPS. Washington, DC: USAID.
USAID (U.S. Agency for International Development). 1998. Handbook of Democracy and Governance Program Indicators. Washington, DC: USAID, Center for Democracy and Governance. Available at: http://www.usaid.gov/our_work/democracy_and_governance/publications/pdfs/pnacc0.pdf. Accessed on August 1, 2007.
USAID (U.S. Agency for International Development). 2000. Conducting a DG Assessment:
A Framework for Strategy Development. Available at: http://www.usaid.gov/our_work/democracy_and_governance. Accessed on August 26, 2007.
USAID (U.S. Agency for International Development). 2006. U.S. Foreign Assistance Reform.
Available at: http://www.usaid.gov/about_usaid/dfa/. Accessed on August 2, 2007.
USAID (U.S. Agency for International Development). 2007. Decentralization and Democratic Local Governance (DDLG) Handbook. Draft.
U.S. Department of State. 2006. U.S. Foreign Assistance Performance Indicators for Use in Developing FY2007 Operational Plans, Annex 3: Governing Justly and Democratically: Indicators and Definitions. Available at: http://www.state.gov/f/releases/factsheets00/0.
htm. Accessed on August 25, 2007.
Wholey, J.S., Hatry, H.P., and Newcomer, K.E., eds. 2004. Handbook of Practical Program Evaluation, 2nd ed. San Francisco: Jossey-Bass.
World Bank. 2004. Monitoring & Evaluation: Some Tools, Methods, and Approaches. Washington, DC: World Bank.
de Zeeuw, J., and Kumar, K. 2006. Promoting Democracy in Postconflict Societies. Boulder:
INTRODUCTION

One of the U.S. Agency for International Development’s (USAID) charges to the National Research Council committee was to develop an operational definition of democracy and governance (DG) that disaggregates the concept into clearly defined and measurable components. The committee sincerely wishes that it could provide such a definition, based on current research into the measurement of democratic behavior and governance. In the current state of research, however, only the beginnings of such a definition can be provided. As detailed below, there is as much disagreement among scholars and practitioners about how to measure democracy, or how to disaggregate it into components, as on any other aspect of democracy research. The result is a welter of competing definitions and breakdowns of “democracy,” marketed by rivals, each claiming to be a superior method of measurement and each the subject of sharp and sometimes scathing criticism.
The committee believes that democracy is an inherently multidimensional concept, and that broad consensus on those dimensions and how
1 Helpful comments on this chapter were received from Macartan Humphreys, Fabrice
Lehoucq, and Jim Mahoney. The committee is especially grateful to those who attended a special meeting on democracy indicators held at Boston University in January 2007: David Black, Michael Coppedge, Andrew Green, Rita Guenther, Jonathan Hartlyn, Jo Husbands, Gerardo Munck, Margaret Sarles, Fred Schaffer, Richard Snyder, Paul Stern, and Nicolas van de Walle. See Appendix C for further information.
to aggregate them may never be achieved. Thus, if USAID is seeking an operational measure of democracy to track changes over time in the countries where it is engaged, a more practical approach would be to disaggregate the various components of democracy and track changes in democratization by looking at changes in those components.
Yet even for the varied components of democracy, there are no available measures that are widely accepted and have demonstrated the validity, accuracy, and sensitivity that would make them useful for USAID in tracking modest changes in democratic conditions in specific countries.
The development of a widely recognized disaggregated definition of democracy, with clearly defined and objectively measurable components, would be the result of a considerable research project that is yet to be done.