The committee sees an enormous opportunity for USAID to accelerate its learning and improve the effectiveness of its programming by using the proposed evaluation initiative to determine whether and how impact evaluations can be applied to DG projects. More broadly, leadership that creates a strong expectation that high-quality evaluations are critical to USAID’s future missions could strengthen USAID’s global leadership in gaining knowledge about democracy promotion, give heightened credibility to USAID’s relations with Congress, and—the committee believes—contribute greatly to achieving USAID’s goals of supporting the spread and strengthening of democratic polities throughout the world.
Terms in italics are defined elsewhere in the Glossary.
METHODOLOGICAL TERMS

Case: A spatially delimited phenomenon observed at a single point in time or over some period of time—for example, a political or social group, institution, or event. By construction, a case lies at the same level of analysis as the principal inference. Thus, if an inference pertains to the behavior of nation-states, the cases in that study will be nation-states. An individual case may also be broken down into one or more observations, sometimes referred to as within-case observations.
Case study: The intensive study of a single case for the purpose of understanding a larger class of similar units (a population of cases). Note that while “case study” is singular—focusing on a single unit—a “case study research design” may refer to a study that includes several cases (e.g., comparative-historical analysis or the comparative method). Synonym: within-case analysis.
Causal inference: Determining from data whether, at a minimum, a causal factor (X) raises the probability of an effect (Y) occurring.
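This minimal probability-raising criterion can be illustrated with a short sketch. The function below is purely illustrative (it is not from the report, and the name `raises_probability` is a hypothetical label): given 0/1 records of the factor and the effect, it checks whether the effect is more frequent when the factor is present than when it is absent.

```python
def raises_probability(data):
    """Check the minimal probability-raising criterion for causal inference:
    is P(Y=1 | X=1) greater than P(Y=1 | X=0)?

    `data` is a list of (x, y) pairs, each coded 0 or 1.
    """
    # Outcomes among units where the causal factor X was present
    with_x = [y for x, y in data if x == 1]
    # Outcomes among units where X was absent
    without_x = [y for x, y in data if x == 0]
    p_y_given_x = sum(with_x) / len(with_x)
    p_y_given_not_x = sum(without_x) / len(without_x)
    return p_y_given_x > p_y_given_not_x
```

Note that this check alone cannot establish causation; confounding must be ruled out by design (see Experiment and Randomization below).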
Control: See Experiment.
Experiment: Generically, a research design in which the causal factor of interest (the treatment or intervention) is manipulated by the researcher so as to produce a more tractable analysis. Within social science circles the term is often equated with a research design in which an additional attribute obtains: cases are randomized across treatment and control groups.
External validity: See Validity.
Internal validity: See Validity.
N: See Observation.
Observation: The most basic element of any empirical endeavor. Any piece of evidence enlisted to support a proposition. Conventionally, the number of observations in an analysis is referred to by the letter N. Confusingly, N is also used to refer to the number of cases.
Randomization: A process by which cases in a sample are chosen randomly (with respect to some subject of interest). An essential element for experiments that use control groups since the treatment and control groups, prior to treatment, must be similar in all respects that are relevant to the inference, and the easiest way to achieve this is through random selection. Sometimes, randomization occurs across matched pairs or within substrata of the sample (stratified random sampling), rather than across the entire population.
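The two forms of randomization described above can be sketched as follows. This is an illustrative sketch only (the function names are hypothetical, not from the report): simple randomization shuffles all cases and splits them into treatment and control groups, while stratified randomization performs that split separately within each stratum so that the groups are balanced on the stratifying attribute.

```python
import random


def randomize(units, seed=None):
    """Simple randomization: shuffle all cases, then split them
    into treatment and control groups of (near-)equal size."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]


def stratified_randomize(units, stratum_of, seed=None):
    """Stratified randomization: randomize separately within each
    substratum of the sample, so treatment and control groups are
    balanced on the stratifying attribute."""
    rng = random.Random(seed)
    strata = {}
    for u in units:
        strata.setdefault(stratum_of(u), []).append(u)
    treatment, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        treatment.extend(members[:half])
        control.extend(members[half:])
    return treatment, control
```

For example, `stratified_randomize(villages, stratum_of=lambda v: v["region"])` would assign treatment within each region, guaranteeing that no region ends up entirely in one group by chance.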
Research design: The way in which empirical evidence is brought to bear on a hypothesis.
Treatment: See Experiment.
Validity: Internal validity refers to the correctness of a hypothesis with respect to the sample (the cases actually studied by the researcher). External validity refers to the correctness of a hypothesis with respect to the population of an inference (cases not studied but that the inference is thought to explain). The key element of external validity thus rests on the representativeness of a sample—that is, its relative bias.
Variable: An attribute of an observation or a set of observations. In the analysis of causal relations, variables are understood either as independent (explanatory or exogenous), denoted X, or as dependent (endogenous), denoted Y.
Within-case analysis: See Case study.
X: See Variable.
Y: See Variable.
TYPES OF INTERVENTIONS

NOTE: USAID does not have a standard terminology to describe the various levels of activities it undertakes.
Activity: An intervention of a single type (e.g., training judges).
Intervention: Any activity or set of activities (e.g., project, program) undertaken by a funder. Usually employed in the context of an evaluation; here, the intervention is the independent variable whose effect on a policy outcome is being assessed.
Program: Includes all projects that address a particular USAID policy area, such as democracy and governance, health, or humanitarian assistance.
Project: Includes all activities within the scope of a particular contract or grant.
TYPES OF APPRAISALS

Country assessment: Appraisal of policy performance at the country level (e.g., levels of corruption or quality of democracy). Purposes of country assessments include tracking progress and regress across countries (including democratic and authoritarian transitions) and identifying common patterns of transition and, possibly, the causal drivers of transition. This information should help funders decide in which countries investments might be most productive and also which sectors of a country are most in need of assistance. Measured by meso- and macro-level indicators.
Evaluation: See below.
Monitoring: Routine oversight of a project’s implementation (e.g., whether funds are spent properly and other terms of the contract are adhered to).
Usually measured with outputs (e.g., number of judges trained).
Strategic: Appraisal of the opportunities and constraints in various countries for transition to democracy or the stabilization or better functioning of democracy. Should be based on hypotheses about the factors that drive or inhibit democracy in specific contexts. Strategic appraisals guide USAID’s central decisions on how much democracy assistance to allot to specific countries in specific time periods. Country assessments, made by USAID DG missions, also involve a strategic appraisal.
Tactical: Appraisal of which programs should be employed, in which areas or sectors, to best assist a country’s transition to, or stabilization of, democracy. Tactical decisions are generally made at the level of the USAID mission DG office, following a country assessment. Good tactical decisions depend on accumulated knowledge about the impacts of specific DG programs in particular contexts, gained through good evaluations.
TYPES OF EVALUATIONS

NOTE: Evaluations should be considered one type of appraisal.
Impact evaluation: A study of a project or set of projects that seeks to determine how observed outcomes differ from what most likely would have happened in the absence of the project(s). Such studies use comparison or control groups, or random assignment of assistance across groups or individuals, to provide a reference against which to assess the observed outcomes for those who received assistance. Randomized designs offer the most accuracy and credibility in determining program impacts and therefore should be the first choice, where feasible, for impact evaluation designs. However, such designs are not always feasible or appropriate, and a number of other designs also provide useful information, though with diminishing degrees of confidence, for determining the impact of many different kinds of assistance projects.
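The core arithmetic of a randomized impact evaluation is simple: the control group’s average outcome stands in for the counterfactual, and the estimated impact is the difference in mean outcomes between the two groups. The sketch below illustrates this (it is an illustrative difference-in-means estimator, not a procedure prescribed by the report):

```python
from statistics import mean


def estimated_impact(treated_outcomes, control_outcomes):
    """Difference-in-means estimate of program impact.

    The control group's mean outcome serves as the counterfactual:
    an estimate of what the treated group's outcomes would most
    likely have been in the absence of the project.
    """
    return mean(treated_outcomes) - mean(control_outcomes)
```

For example, if treated villages score [6, 7, 8] on some outcome measure and control villages score [4, 5, 6], the estimated impact is 2. In practice an evaluator would also report a standard error or confidence interval for this difference, since with small samples the estimate is noisy.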
Output evaluation (generally equivalent to “project monitoring” within USAID): These evaluations consist of efforts to document the degree to which a program has achieved certain targets in its activities. Targets may include spending specific sums on various activities, giving financial support or training to a certain number of nongovernmental organizations (NGOs) or media outlets, training a certain number of judges or legislators, or carrying out activities involving a certain number of villagers or citizens. Output evaluations or monitoring are important for ensuring that activities are carried out as planned and that money is spent for the intended purposes.
Participatory evaluation: Individuals, groups, or communities that will receive assistance are involved in the development of project goals, and investigators interview or survey participants during and/or after the project to determine what their goals and expectations were, how valuable the activity was to them, and whether they were satisfied with the project’s results.
Process evaluation: Focuses on how and why a program unfolded in a particular fashion, and if there were problems, on why things did not go as originally planned. Usually conducted after completion of a project, often using teams of experts who conduct interviews and examine project records. Currently the primary source of “lessons learned” and “best practices” intended to inform and assist project managers and implementers.