Development, Security, and Cooperation
Policy and Global Affairs
THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001
The importance of creating consensus on these matters can hardly be overemphasized. The purpose of a democracy index is not simply to guide policymakers and policymaking bodies such as USAID, the World Bank, and the International Monetary Fund. Nor could it be so constrained, even if it were desirable. As soon as an index becomes established and begins to influence international policymakers, it also becomes fodder for dispute in other countries around the world. A useful index is one that gains the widest legitimacy. A poor index is one that is perceived as a tool of Western influence or a mask for the forces of globalization (as Freedom House is sometimes regarded). Indeed, because current democracy scales are produced by proprietary scalings and aggregations by specific organizations rather than by objective measurements, those organizations are often subjected to “lobbying” by countries that wish to shift their scores. The hope is that by disaggregating the components of democracy down to levels that are more operational and less debatable, it might be possible to garner a broader consensus around this vexed subject. Countries would know, more precisely, why they received the scores they did. They would also know, more precisely, what areas remained for improvement.
A second advantage is the degree of precision and differentiation that a disaggregated index offers relative to the old-fashioned “Big D” concept of democracy. Using the committee’s proposed index, a single country’s progress and/or regress could be charted through time, allowing for subtle comparisons that escape the purview of highly aggregated measures such as Freedom House and Polity. One would be able to specify which facets of a polity have improved and which have remained stagnant or declined. This means that the longstanding question of regime transitions would be amenable to empirical tests. When a country transitions from autocracy to democracy (or vice versa), which elements come first?
Are there common patterns, a finite set of sequences, prerequisites? Or is every transition, in some sense, unique?
Similarly, a disaggregated index would allow policymakers to clarify how, specifically, one country’s democratic features differ from others in the region or across regions. While Big D democracy floats hazily over the surface of politics, the dimensions of a disaggregated index are comparatively specific and precise. Contrasts and comparisons may become correspondingly more acute.
Applying the Proposed Index to Democracy Assistance Programming

It is important to remember that, although the committee’s general goal is to provide a path to democracy measures that will be useful to policymakers and citizens alike, the specific charge is to assist USAID. This means the index must be useful for particular policy purposes. Consider the problem of assessment. How can policymakers in Washington and in the field missions determine which aspects of a polity are most deficient and therefore in need of assistance? While Freedom House and Polity offer only one or several dimensions of analysis (and these are highly correlated and difficult to distinguish conceptually), the committee’s proposed index would begin with 13 such parameters. It seems clear that for assessing the potential impact of programs focused on different elements of a polity (e.g., rule of law, civil society, governance, and elections—the four subunits of the DG office at USAID), it is helpful to have indicators that offer a differentiated view of the subject.
These same features of the proposed index are equally advantageous for causal analysis, which depends on the identification of precise mechanisms, intermediate factors that are often ignored by macro-level cross-national studies. Which aspects of democracy foster (or impede) economic growth? Which aspect of democracy is most affected by specific democracy promotion efforts? Whether democracy is looked on as an independent (causal) variable or as a dependent (outcome) variable, we need to know which aspect of this complex construct is at play.
Policymakers also wish to know what effect their policy interventions might have on a given country’s quality of democracy (or on a whole set of countries, considered as a sample). There is little hope of answering this question in a definitive fashion if democracy is understood only at a highly aggregated level. The interventions by democracy donors are generally too small relative to the outcome to draw plausible causal inferences between USAID policies, on the one hand, and country A’s level of democracy (as measured by Freedom House or Polity) on the other. However, it is plausible—though admittedly still quite difficult—to estimate the causal effects of a project focused on a particular element of democracy if that element can be measured separately. Thus, USAID’s election-centered projects might be judged against several specific indicators that measure the characteristics of elections. This is plausible and perhaps quite informative (though, to be sure, many factors other than USAID have an effect on the quality of elections in a country). The bottom line is this: If policymakers cannot avoid reference to country-level outcome indicators, they will be much better served if these indicators are available at a disaggregated meso level.
All of these features should enhance the utility of a disaggregated index for policymakers. Indeed, the need for a differentiated picture of democracy around the world is at least as important for policymakers as it might be for academics. Both are engaged in a common enterprise, an enterprise that has thus far been impeded by the lack of a sufficiently discriminating measurement instrument.
Consider briefly the problem that would arise for macroeconomists, finance ministers, and members of the World Bank and International Monetary Fund if they possessed only one highly aggregated indicator of economic performance. As good as GDP is (and there are, of course, considerable difficulties), it would not go very far without the existence of additional variables that measure the components of this macro-level concept. There is a similar situation in the field of political analysis. We have a crude sense of whether countries are democratic, undemocratic, or in between (e.g., “partly free” or partially democratic), but we have no systematic knowledge of how a country should be scored on the various components of democracy.
Since a disaggregated index can be aggregated in a variety of ways, developing a disaggregated index is advantageous even if a single aggregated measure is sometimes desired for policy purposes. Indeed, it is expected that scholars and policymakers will compose summary scores from the underlying data provided by this index. However, the benefit of beginning with the same underlying data (along each of the identified dimensions) is that the process of aggregation is rendered transparent.
Any composite index based on these data would be forced to reveal how the summary score for a particular country in a particular year was determined. Any critic of the proposed score, or of the summary index at large, would be able to contest the aggregation rules used by the author. The methodology is “open source” and thus subject to revision and critique.
Further, any causal or descriptive arguments reached on the basis of a summary indicator could be replicated with different aggregation rules.
If the results were not robust, it might be concluded that such conclusions were contingent on a particular way of putting together the components of democracy. In short, both policy and scholarly discourse might be much improved by a disaggregated index, even if the ultimate objective involves the composition of a highly aggregated index of Big D democracy.
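The logic of transparent, contestable aggregation described above can be illustrated with a minimal sketch. The component names, weights, and scores below are entirely hypothetical (they are not the committee’s proposed indicators or values); the point is only that, given the same disaggregated data, different analysts can apply different aggregation rules and compare the resulting summary scores openly.

```python
# Hypothetical illustration only: component names and scores are invented,
# not drawn from any actual democracy index.
component_scores = {
    "elections": 0.8,
    "rule_of_law": 0.5,
    "civil_society": 0.7,
    "governance": 0.6,
}

def aggregate_mean(scores):
    """Equal-weight rule: average all components."""
    return sum(scores.values()) / len(scores)

def aggregate_weakest_link(scores):
    """'Weakest link' rule: a polity scores no higher than its
    weakest component (a stricter aggregation assumption)."""
    return min(scores.values())

# The same underlying data, two openly stated aggregation rules:
print(f"equal-weight mean: {aggregate_mean(component_scores):.2f}")
print(f"weakest link:      {aggregate_weakest_link(component_scores):.2f}")
```

A critic who objects to equal weighting can substitute a different rule and rerun the summary, which is precisely the robustness check the text envisions: conclusions that survive several reasonable aggregation rules are more credible than those that depend on one.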
Funding and Management

Readers of this document might wonder why, if the potential benefits of a disaggregated democracy index are so great, one has not yet been developed. There are two simple answers to this question. First, producing such an index would be a time-consuming and expensive proposition, requiring the participation of many researchers. It would not be easy.
Second, although the downstream benefits are great, no single scholar or group of scholars has the resources or the incentives to produce such an index.15 (Academic disciplines do not generally reward members who labor for years to develop new data resources.) Consequently, academics have continued to use—and complain about—Polity, Freedom House, ACLP, and other highly aggregated indices. Policymakers will have to step into this leadership vacuum if they expect the problem of faulty indicators to be solved.
Precedents for such support can be found in other social science fields. USAID served as a principal funder for demographic and health surveys that vastly enhanced knowledge of public health throughout the developing world.16 The State Department and the Central Intelligence Agency served as principal funders of the Correlates of War data collection project.17 On a much smaller scale, the State Department provides ongoing support for the Polity project.
To be sure, the entire range of indicators proposed here is probably larger than any single funder is willing or able to undertake. It is highly advisable that several funders share responsibility for the project so that its financial base is secure into the future and so that the project is not wholly indebted to a single funder, a situation that might raise questions about project independence. Preferably, some of these funders would be non-American (e.g., Canadian, European, Japanese, European Union, or international organizations like the World Bank or the United Nations Development Program). Private foundations (e.g., Open Society Institute, Google Foundation) might also be tapped. The committee conceptualizes this project as a union of many forces. This makes project management inevitably more complicated. However, the sorts of difficulties encountered here, insofar as they constitute a deliberative process about the substantive issues at stake, may enhance the value of the resulting product.

15 Note that while scholars who are discontented with the leading indicators of democracy periodically recode countries of special concern to them (e.g., McHenry 2000; Berg-Schlosser 2004a,b; Acuna-Alfaro 2005; Bowman et al. 2005), this recoding is generally limited to a small set of countries and/or a short period of time.
16 Surveys and findings are described on the USAID Web site: http://www.measuredhs.
17 Information about the project may be found at http://www.correlatesofwar.org/.
Certainly, it will enhance its legitimacy.
Another possibility is that different funders might undertake to develop (or take responsibility for) different dimensions of the index, thus apportioning responsibility. It is preferable, in any case, that some level of supervision be maintained at the top so that the efforts are well coordinated. Coordination involves not only logistical issues (sharing experts in the field, software, and so forth) but also, more importantly, the development of indicators that are mutually exclusive (nonoverlapping) so that the original purpose of the project—disaggregation—is maintained. Note that several of the above-listed components might be employed across several dimensions, requiring coordination on the definition and collection of that variable.
As a management structure, the committee proposes an advisory group to be headed by academics—with some remuneration, depending on the time requirements, and suitable administrative support—in partnership with the policy community.18 This partnership is crucial, for any widely used democracy assessment tool should have both a high degree of academic credibility and legitimacy among policymakers. Major shortcomings of previous efforts to develop indices of democracy and governance resulted from insufficient input from methodologists and subject specialists or from a lack of broad collaboration across different stakeholders.