Development, Security, and Cooperation
Policy and Global Affairs
THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001
terns, background of key officials, location, ethnic composition, number and type of health facilities, and infection rates. The most important criteria to ensure comparability should be determined in consultations with experts. Grouped subcounties might be next to each other but immediate proximity is not necessary (or even desirable).13 In each subcounty, one CSO working in HIV/AIDS will be selected with the aim of finding similar CSOs across three subcounties in the group.
One subcounty in each group will be randomly assigned to receive a large CSO grant to monitor HIV/AIDS services in the subcounty. Another subcounty in the group will be randomly selected to receive a small CSO grant for HIV/AIDS. The remaining subcounty in the group will act as the pure control and receive no grant. This will be repeated for at least 50 groups, and preferably more.14 It is important to ensure that: (1) the large grant provides a significant increase to the existing budget of the CSO, while the small grant does not; and (2) the CSOs spend their grants entirely on HIV/AIDS activities within the selected subcounty and there is no contamination (sharing of resources or expertise) across subcounties. It would probably work best to select CSOs that work in only a single subcounty, to prevent a CSO from shifting its other funds toward or away from the treatment site in response to the grant. CSOs in both the treatment and partial-control groups should receive equivalent technical assistance and training on how to use the grant money and how to monitor and improve service delivery. USAID interactions with the CSOs in the treatment and partial-control groups should be equivalent throughout.
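The triplet assignment procedure described above can be sketched in a few lines of code. The following Python fragment is illustrative only; the subcounty labels and the fixed random seed are hypothetical placeholders, not part of the proposal.

```python
import random

def assign_triplets(triplets, seed=42):
    """For each matched triplet of similar subcounties, randomly assign
    one member to a large CSO grant, one to a small grant, one to control."""
    rng = random.Random(seed)  # a fixed seed keeps the assignment auditable
    arms = ["large_grant", "small_grant", "control"]
    assignment = {}
    for triplet in triplets:
        shuffled = list(triplet)
        rng.shuffle(shuffled)
        for subcounty, arm in zip(shuffled, arms):
            assignment[subcounty] = arm
    return assignment

# Hypothetical triplet labels; the actual design calls for 50+ matched triplets.
result = assign_triplets([("A1", "A2", "A3"), ("B1", "B2", "B3")])
```

In practice the shuffle would be run once, witnessed, and archived so the assignment can be audited later.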
Evaluation. The primary question for evaluation purposes is: What are the effects of monetary grants on the organizational capacity of CSOs and on the ability of CSOs to monitor and improve government service delivery? The best possible evaluation for this type of project would be a large N randomized controlled field experiment. Because a large N study would require sizeable grants to at least 50 CSOs and additional monitoring and measurement, the costs are greater than what is currently envisioned for CSO grants within the Linkages program. However, this design offers substantial benefits over a small N experiment and is of general interest to USAID.
13 Instead of grouping subcounties in sets of three, it might be more feasible to use an alternative stratified sampling procedure whereby all the subcounties in the sample are stratified into types according to key factors and then subcounties within each stratum are randomly assigned into each of the three categories.
14 Depending on the districts chosen for Linkages, it may be possible to randomly select all the treatment and control subcounties from within the 10 districts.
APPENDIX E

Measurement. Data should be collected before the grants are awarded, after the money is given (or at several points during the grant period), and two years after the end of the grant, in order to assess both short-term and medium-term effects of the monetary infusion. Equivalent data should be collected about CSOs and service delivery in the treatment, partial-control, and full-control subcounties. The ability of USAID to collect comparable data in the partial-control group should be facilitated by the fact that those CSOs are receiving some funds from USAID. USAID may have to provide a small fee or incentive to the CSOs not receiving grants to enable the collection of similarly intrusive and time-consuming data from the CSOs in the pure control group.
In order to study the effect of grants and increased resources on the organizational capacity of the CSOs, data should be collected on the budget, activities, operations, and planning of the CSOs. In addition, pre- and postintervention surveys can be conducted with CSO employees, volunteers, government officials and employees, and stakeholders to evaluate changes in the activities, effectiveness, and reputation of the CSOs.
In order to evaluate the effect of grants on government service delivery, data can be collected on HIV/AIDS services and outcomes within each subcounty. Many of these data may already be collected by the government (such as through the periodic National Service Delivery Survey conducted by the Uganda Bureau of Statistics (UBOS)—though perhaps USAID would need to fund an oversampling in treatment and control subcounties), or perhaps they can be collected in collaboration with other donor projects such as the President’s Emergency Plan for AIDS Relief.
Special attention should be given during the research design stage to determine the government activities that are likely to be affected by greater CSO involvement and how those activities might be accurately measured. Additional data collection could be done through surveys of service recipients or randomized checks on facilities and services. In addition, money-tracking studies of local government and government agencies could be conducted to evaluate the level of corruption in HIV/AIDS projects within the selected subcounties.
Possible alternatives.

1. The grants could be given for an issue other than HIV/AIDS.
Selected issues must be ones where (a) the government plays a major role in providing services and (b) there are measurable outcomes of service delivery.
2. The intervention can be carried out at either the district level or the village level instead of at the middle subcounty level. At higher levels of local government, CSOs are denser and better organized. While the ability of CSOs to effect change in government may be greater at higher levels, the size of the grant needed to make a detectable difference will also be larger. Furthermore, it may be too difficult to find similar groups, and to protect units from contamination by other donors at higher levels of government.
3. If additional funds cannot be secured to conduct a large N randomized controlled experiment, a small N experiment could be conducted with the available funds, although with significantly less power to accurately evaluate the effects of CSO grants. In order to increase the number of possible comparisons, and to help control for the effect of context with a small number of treatment sites, a variation on the above design may be warranted. The inclusion of a second issue area may facilitate analysis in a small N context. For example, in each subcounty, one CSO working on education and one working on HIV/AIDS will be selected with the aim of finding similar CSOs across subcounty groups and issues. One subcounty will be randomly assigned to receive a large education grant and a small HIV/AIDS grant, and another subcounty will receive a large HIV/AIDS grant and a small education grant. Figure E-1 provides an illustration.
This research design affords several useful comparisons. Within a single subcounty, changes in the education CSO versus the HIV/AIDS CSO (one of which got a large grant and the other of which got a small grant) can be compared, and the degree of change in each sector can be evaluated. Within each subcounty group, the education CSOs (one with a large grant, one with a small grant, and one with no grant) can be compared and the changes in educational outcomes across the grouped subcounties can be compared. In addition, within each subcounty group, the two HIV/AIDS CSOs (one with a large grant, one with a small grant, and one with no grant) can be compared and the changes in HIV/AIDS outcomes across the grouped subcounties can be compared. The repetition of these comparisons across a number of different groups will help the researchers to parse out the effects of the grants from contextual factors.
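The crossed two-issue assignment behind these comparisons (illustrated in Figure E-1) can also be sketched in code. Again, the subcounty labels, bundle names, and seed below are hypothetical placeholders.

```python
import random

# Each subcounty in a matched triplet gets one of three bundles:
# a large HIV/AIDS grant paired with a small education grant, the
# reverse pairing, or no grants at all (the pure control).
BUNDLES = [
    {"hiv_aids": "large", "education": "small"},
    {"hiv_aids": "small", "education": "large"},
    {"hiv_aids": "none", "education": "none"},
]

def assign_crossed(triplets, seed=3):
    """Randomly map the three grant bundles onto each matched triplet."""
    rng = random.Random(seed)
    assignment = {}
    for triplet in triplets:
        order = list(triplet)
        rng.shuffle(order)
        for subcounty, bundle in zip(order, BUNDLES):
            assignment[subcounty] = bundle
    return assignment

crossed = assign_crossed([("S1", "S2", "S3")])
```

Every subcounty in a triplet then anchors both a within-subcounty comparison (large vs. small grant across issues) and a between-subcounty comparison within each issue.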
Training and Assistance for a Random Selection of New Members of Parliament

The Strengthening Democratic Linkages in Uganda program seeks to enhance the knowledge, expertise, and resources of members of parliament (MPs) so they can more effectively operate in a multiparty parliament, legislate and perform oversight functions, foster sustainable development, and engage constituents, civil society, and local governments.
The entire group of new MPs (approximately 150) will be randomly divided into two groups. USAID can explain that they only have enough resources to work with half the group at a time and that the fairest way to decide is by lottery. To ensure that the partisan makeup of the treated group is equivalent to the control group, USAID will probably want to stratify by party affiliation. They may also want to stratify by other key factors such as previous political experience, committee assignment, and gender and randomly assign MPs within strata to ensure that the treatment and control groups are equivalent along critical dimensions.
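A minimal sketch of this stratified random assignment, assuming a roster of MP records with party and gender fields (the names and strata below are hypothetical; real strata might add prior experience or committee assignment):

```python
import random
from collections import defaultdict

def stratified_assign(mps, strata_keys, seed=7):
    """Split MPs into treatment and control at random within each stratum
    (e.g., party x gender), so the two groups are balanced on those factors."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for mp in mps:
        strata[tuple(mp[k] for k in strata_keys)].append(mp["name"])
    treatment, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        cut = (len(members) + 1) // 2  # odd stratum: extra member to treatment
        treatment.extend(members[:cut])
        control.extend(members[cut:])
    return treatment, control

# Hypothetical roster entries standing in for the ~150 new MPs.
mps = [
    {"name": "MP1", "party": "A", "gender": "F"},
    {"name": "MP2", "party": "A", "gender": "F"},
    {"name": "MP3", "party": "B", "gender": "M"},
    {"name": "MP4", "party": "B", "gender": "M"},
]
treated, control = stratified_assign(mps, ["party", "gender"])
```

Because randomization happens inside each stratum, the partisan and demographic makeup of the two groups matches by construction rather than by luck.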
The treatment group will receive intensive personalized training and assistance from technical personnel. This assistance may take the form of group trainings on key issues, weekly or bi-monthly individual meetings with trained legal assistants, regular research assistance on topics chosen by the MP, secretarial services, and/or repeated meetings with CSO representatives. The control group will not receive these additional services (at least initially). It is important to ensure that the intervention (1) is deemed useful by the MPs so that they continue to participate fully in the program for its duration; (2) is significant enough that the effects, if there are any, can be measured; and (3) is limited to the MPs in the treatment group alone and not easily passed on to those in the control group. For example, if the treatment were the distribution of a newsletter each week to the treatment group, then it is very likely that many legislators in the control group would gain access to the newsletter and receive the same treatment as those in the treatment group.
Measurement. Jeremy Weinstein and Macartan Humphreys, in cooperation with the African Leadership Initiative, are currently producing annual scorecards for all of Uganda’s MPs recording their behavior in the parliament, in committee, and in their constituencies. These scorecards could be used to compare the behavior of MPs in the treatment and control groups. In addition, surveys could be conducted with MPs to measure the knowledge and reported behavior of new MPs and to assess perceptions of fellow MPs. Surveys could also be conducted with parliamentary staff, civil service leaders, key stakeholders, or constituents to assess the reputation and influence of different legislators. Perhaps other measures of MP involvement (such as visits to the library) can be collected. Eventually, for those who run for reelection, the vote results could be used to evaluate popularity.
Evaluation. For the purposes of evaluation, the most important question is: What are the effects of technical training and assistance on the ability of individual legislators to operate more actively, effectively, and independently in parliament?
Possible alternatives.

1. To reduce the costs of the intervention, a smaller number of MPs can be selected to be in the treatment group. The required number depends on the intensity of the intervention, the quality of the measures, and the heterogeneity of the group, but a treatment group of 50 MPs may be sufficient.
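Whether 50 MPs per group is sufficient can be checked with a rough power calculation. The sketch below uses a standard normal approximation for a two-sided comparison of group means; the effect size of 0.5 is an assumed illustration, not a figure from the program.

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(effect_size, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided comparison of two group means,
    using the normal approximation, for a standardized effect size."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    shift = effect_size * sqrt(n_per_arm / 2)
    return nd.cdf(shift - z_alpha) + nd.cdf(-shift - z_alpha)

# With 50 MPs per group, a moderate standardized effect (d = 0.5)
# would be detected with roughly 70 percent power.
power = two_sample_power(0.5, 50)
```

Under these assumptions, power is roughly 70 percent; smaller effects or noisier measures would call for larger groups, consistent with the caveats in the item above.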
2. If it is not politically feasible to provide benefits to only some of the new MPs, then the treatment could be conducted in a rollout fashion. Half (or one-third) of the MPs would receive the treatment for the first several years, and the other group would receive the treatment in the later part of the term. The interventions with each group would have to be timed to fit with the collection of data for the scorecards.
3. Returning MPs could also be included in the experiment, although returning MPs are more experienced and thus less likely to be affected by additional assistance. Their inclusion also adds to the heterogeneity of the population. The intervention activities (and the associated costs) would have to be greater, and/or more widespread, in order to discern an effect.
Revised Remuneration Policies to Fight Corruption

The Strengthening Capacity to Fight Corruption in Uganda Program suggests that “the Government of Uganda will consider increased pay for key personnel, through the implementation of an enhanced remuneration package for anti-corruption investigators and prosecutors.” The revised remuneration policies would “enable performance (job evaluation) based salary structures for anti-corruption prosecutors, investigators, and other officers within GOU entities such as the DEI, DPP and the CID fraud squad.” The effects of changes in remuneration policies are of general interest to USAID. Although the implementation of the program cannot be manipulated to create contemporaneous control or comparison groups, the effects can still be evaluated effectively with a temporal comparison—before and after the intervention. The main consideration is to try to ensure that exogenous shocks do not take place during the period of measurement. For that reason we suggest that such an intervention could be accurately evaluated only if it took place some time before the other proposed reforms in the Request for Proposal for Strengthening Capacity to Fight Corruption in Uganda. Perhaps the changes in remuneration could be implemented immediately, while the other interventions are still in the planning stage.
Measurement. The main comparison is before the change in remuneration policies versus after the change. To evaluate the effect of the changes on recruitment and retention, the qualifications of current employees will be assessed. In addition, the qualifications of all those who apply, and of former employees who sought alternative employment, should also be assessed. To evaluate the effect of the remuneration policy changes on the effectiveness of anticorruption activities, the number of malpractices that are detected, effectively investigated, prosecuted, punished, and publicized before and after the changes can be compared.
Evaluation. The primary question from the perspective of evaluation is:
How do changes in remuneration policies affect recruitment and retention of qualified personnel and the performance of employees?
Possible alternatives. If time permits, it would be better to stagger the changes in remuneration policies by types of civil servants or grades. For example, prosecutors could receive the new remuneration packages several months before the investigators. Thus, if there is an external shock, it is less likely to similarly affect the outcomes of every subject of the study.
Current/Recently Ended
Local Government (2004–end July 2007)—Urban Institute
Rule of Law (2004–end July 2007)—Casals
Political Parties and Civic Participation (2004–September 2007)—NDI/IREX/Partners Albania
Anti-Corruption/MCC Threshold (2006-2008)—Chemonics

Peru (June 2007)

Current
Pro Decentralization (PRODES)—ARD, Inc.
Political parties/Elections—NDI/Transparencia
Congress Program—United Nations Development Program and George Washington University
LAPOP Survey “Democracy Political Culture in Peru, 2006”—Vanderbilt University

Not Included in Field Visit
Conflict Mitigation in Mining—CARE
Human Rights National Coordinator Institutional Development and Therapy Attention to Victims of Torture and Political Violence—Human Rights National Coordinator and Center for Psycho-Social Attention
Trafficking in Persons—Capital Humano y Social Alternativo

Recent/Soon to End
Decentralization (to end December 2007)—ARD

Not Included in Field Visit
Community Resilience and Dialogue (September 2002–September 2007)—International Rescue Committee
Albania Team Members: David Black, USAID; Rita Guenther, National Academies; Jo Husbands, National Academies; Karen Otto, consultant; Daniel Posner, consultant.
Karen Otto, a former USAID direct hire, is a monitoring and evaluation specialist/consultant with a strong background in democracy and governance (especially rule of law). She has developed 70 performance monitoring plans for proposals and ongoing development projects in a wide array of areas, particularly DG. She has evaluated the performance of many development projects and the operations of all federal courts in the United States, and has developed a formal evaluation system for the Administrative Office of the U.S. Courts to review courts under its jurisdiction. Ms. Otto has been a court administrator in federal, state, and municipal courts in the United States. She has been a rule of law advisor in USAID and a project manager for DG projects overseas. She has personal experience in many of the areas involved in DG activities: court administration (she was a court administrator), media (she was a journalist), judicial disciplinary system (she was an inspector in a judicial inspection service), etc.
Daniel Posner, associate professor of political science at the University of California, Los Angeles, conducts research in the following four broad areas: ethnic politics, ethnicity and economic development, political change in Africa, and social capital and civil society. His research in this area is motivated by a number of questions: When and why do some ethnic identities (and ethnic cleavages) matter for politics, and when do they not? Why, when people think about who they are, do they see themselves (and others) as members of particular ethnic groups, and why do the groups that they see themselves as part of have the sizes and physical locations that they do? How can we reconcile what we know about the fluidity and context dependence of ethnic identities and ethnic cleavages with the need to measure social diversity and code individuals by their group affiliations? Why does ethnicity matter for collective action? How well are people able to identify the ethnic backgrounds of others? He approaches each of these questions with a combination of theory and the collection of original data (including experimental data).
Peru Team Members: Moises Arce, consultant; Tabitha Benney, National Academies; David Black, USAID; Thad Dunning, consultant; Rita Guenther, National Academies.
Moises Arce is an associate professor in the Department of Political Science at the University of Missouri. His research focuses on the politics of market reform, comparative political economy, and Latin American politics (Peru). He has received funding from the National Science Foundation, the Social Science Research Council, and the Fulbright Scholar Program.
His publications include the book Market Reform in Society: Post-Crisis Politics and Economic Change in Authoritarian Peru, and articles in the Journal of Politics, Comparative Politics, Comparative Political Studies, and the Latin American Research Review. He previously taught at Louisiana State University. He received his Ph.D. in 2000 from the University of New Mexico.
Thad Dunning is assistant professor of political science and a research fellow at the Whitney and Betty MacMillan Center for International and Area Studies at Yale. His current research focuses on the influence of natural resource wealth on political regimes; other recent articles investigate the influence of foreign aid on democratization and the role of information technology in economic development. He conducts field research in Latin America and has also written on a range of methodological topics, including econometric corrections for selection effects and the use of natural experiments in the social sciences. Dunning’s previous work has appeared in International Organization, the Journal of Conflict Resolution, Studies in Comparative International Development, Geopolitics, and in a forthcoming Handbook of Methodology (Sage Publications). In 2006-2007, he was teaching an undergraduate lecture course and a seminar on ethnic politics and a graduate seminar on formal models of comparative politics. He received a Ph.D. in political science and an M.A. in economics from the University of California, Berkeley.
Uganda Team Members: Mark Billera, USAID; Mame-Fatou Diagne, consultant;
John Gerring, committee member; Jo Husbands, National Academies;
Devra Cohen Moehler, consultant.
Mame-Fatou Diagne is a Ph.D. candidate in economics at the University of California, Berkeley. A native of Senegal, she graduated from the Institut d’Etudes Politiques de Paris and received a Master of International Affairs from Columbia University. She has worked as an emerging markets economist for Societe Generale in Paris and for Standard and Poor’s in London, where she was the principal analyst for South Africa and other African-rated sovereigns. Her current areas of research are development, public and labor economics, and particularly, the economics of education and political economy in Africa.
Devra Cohen Moehler is an assistant professor of political science at Cornell University. She recently returned to Cornell from two years as a Harvard Academy Scholar at the Harvard Academy for International and Area Studies. Her research interests include political communications, education and democratization, consequences of political participation, political behavior, comparative constitution-making, law and development, cross-national survey research, and the international refugee regime.
Her dissertation, based on research conducted in Uganda, focused on the effects of citizen participation in Ugandan constitution making in creating “distrusting democrats.” She received her Ph.D. in political science from the University of Michigan and a B.A. in development studies from the University of California, Berkeley.
Voices from the Field: Model Questionnaire
Good day. As you know, my name is _______. As part of ongoing attempts on the part of DCHA [the Bureau of Democracy, Conflict, and Humanitarian Assistance] to better understand the effect of our democracy promotion activities in countries around the world, we are conducting a series of surveys with DG advisors and activity managers. You have been selected to participate in this survey because of your extensive knowledge and experience. We will spend approximately 90 minutes with you asking a series of questions about your experiences. I will take handwritten notes of your responses. Please feel free to ask me clarifying questions as we progress. At the end of the interview, there will be an opportunity for you to address any subjects or issues that we may have missed or given less emphasis than they deserve. Please be assured that you can talk with candor; your responses will remain anonymous. We do intend to aggregate the responses of all our interlocutors for the purposes of reporting and improving DCHA recommended approaches in the future and we may use quotes from our interviews, stripped of identifying information.
However, any specific references to what you tell us will only be used with your consent.
Do you have any questions before we begin?
Let’s begin by talking about your work with USAID.
1. In total, how long have you worked for USAID?
2. How much of your time with USAID has been spent working in the Democracy and Governance Sector?
3. In how many countries and for how long have you worked in the DG Sector with USAID? Please list for me the name of the country and how long you have worked in each country.
4. Which DG subsectors have you worked in for USAID? Please list the name of the country and the subsector(s) in which you worked in that country. [Interviewer: Write the name of the country and place an X in the box below the subsector(s) for that country. (Subsectors: Civil Society, Rule of Law, Legislative Strengthening, Electoral Processes, Anti-Corruption, Media, Human Rights, Other)]

Now let’s talk about some of the specific USAID DG programs that you have worked on. First, we are interested in how you think about program success.
1. Considering all of the DG programs that you have worked on, supervised, or directly observed, can you tell me which one or two you think were the most successful?
2. [Interviewer: If one program was identified above, skip this question and go directly to question #. If two programs are identified above, ask:] In your view, which of these two was the most successful?
3. Let’s get a little more information about this program. In which country was it carried out?
4. During which years did the program operate?
5. During this time, when were you involved with the program?
6. What was the approximate funding level? Please indicate the life of project funding and the annual funding.
7. What were the objectives of the program?
8. Can you please describe the basic operation of the program? How did it work?
9. And why do you say that this program was the most successful?
What did it accomplish?
10. Can you give me a few examples of success?
11. Can you identify the particular factors that seem to have led to the success of this program and why each factor that you identify was important? [Interviewer: Be sure to prompt informant to answer why each factor is important.]

Factor 1 & why important?
Factor 2 & why important?
APPENDIX F

Factor 3 & why important?
Factor 4 & why important?
Factor 5 & why important?
12. Considering all of the factors that you have just told me about, can you identify which one or two are the most important contributors to the program’s success?
Most important factor
2nd most important factor

We have developed a list of factors that have often been associated with program success and failure. Some of them are mirrored in the factors you have identified; a few others have not yet been mentioned. We would like you to describe for us how, if at all, these particular variables seem to be related to the success of the program.
13. Sometimes, program success can be influenced by country-specific enabling factors, things like the general level of economic development, cultural and social conditions, or historic precedent. On a scale of 1 to 5 with 5 representing the highest level of importance, how would you rank the importance of these factors in determining the success of the program? [Interviewer: Circle a single number]
14. Were any attributes in this cluster of factors particularly important, and if so, why?
Attribute 1 & why important?
Attribute 2 & why important?
15. Now let’s look at the country more specifically in terms of democratic development. Sometimes political factors like level of commitment to reform, institutional capacity, level of corruption, level of press freedom, degree of political competition, capacity and activity of civil society, and other factors can influence the success of DG programs. On a scale of 1 to 5 with 5 representing the highest level of importance, how would you rank the importance of these factors in determining the success of the program? [Interviewer: Circle a single number]
16. Were any attributes in this cluster of factors particularly important, and if so, why?
Attribute 1 & why important?
Attribute 2 & why important?
17. Foreign policy priorities of the USG can sometimes have an important influence on program success. U.S. priorities in the country, the role of the Embassy, and other USG actors (DEA, DOD, CDC, MCC, etc.) can affect the success of DG programs. On a scale of 1 to 5 with 5 representing the highest level of importance, how would you rank the importance of these factors in determining the success of the program? [Interviewer: Circle a single number]
18. Were any attributes in this cluster of factors particularly important, and if so, why?
Attribute 1 & why important?
Attribute 2 & why important?
19. International factors often play a role in determining program success. The political conditions in the region, international political orientation and diplomatic considerations of the country, and the interests and activities of other donors might play varying roles. On a scale of 1 to 5 with 5 representing the highest level of importance, how would you rank the importance of these factors in determining the success of the program? [Interviewer: Circle a single number]
20. Were any attributes in this cluster of factors particularly important, and if so why?
Attribute 1 & why important?
Attribute 2 & why important?
21. Program-specific factors are also often important in determining success. Things like levels of funding for the program, length or sequencing of the program, implementation mechanism, quality of project design, quality or experience of the implementing partners’ (contractors/grantees) staff or home office support; quality of the implementing partners’ program management; quality of host country partners, willingness to take risks, etc., can all influence success.
On a scale of 1 to 5 with 5 representing the highest level of importance, how would you rank the importance of these factors in determining the success of the program? [Interviewer: Circle a single number]
22. Were any attributes in this cluster of factors particularly important, and if so, why?
Attribute 1 & why important?
Attribute 2 & why important?
23. The USAID mission itself is often a factor associated with program success. For example, the priority given the DG sector, experience and staffing level of DG staff, programmatic relationships between DG and other mission sectors, the quality of mission management and leadership, and the impact of previous USAID activities can all be important. On a scale of 1 to 5 with 5 representing the highest level of importance, how would you rank the importance of these factors in determining the success of the program? [Interviewer: Circle a single number]

Attribute 1 & why important?
Attribute 2 & why important?
24. Now, let’s look back briefly at the question where you identified a number of factors that you thought were determinants of the program’s success. You mentioned [Interviewer: Turn back to Question and read a summary of each of the factors identified by respondent]. Considering the factors that you mentioned and the factors that we have just discussed, would you like to make any additions or changes in the level of importance? Recall that the factors we have just discussed are: (1) Country-specific enabling environment, (2) Democratic/political, (3) Foreign policy/other donors, (4) International, (5) Program-specific, (6) USAID mission.
Most important factor
2nd most important factor
3rd most important factor
4th most important factor
5th most important factor

[Interviewer: If a second program was identified as successful, repeat the sequence of questions. If only one program was identified, go directly to the next series of questions.]
25. We have talked quite a bit about successful DG programs and it is nice to find out what works. Let’s take a few minutes to consider the other side of the coin. Can you tell me about one or two of the biggest “turkeys”? As you reflect on your experience, what is the worst program that you ever worked with?
26. During which years did this program operate?
27. During this time, when were you involved with the program?
28. What was the approximate funding level? Please indicate the life of project funding and the annual funding.
29. What were the objectives of the program?
30. Can you please describe the basic operation of the program? How did it work?
31. And why do you say that this program was not successful? Why was it a “turkey”?
32. If we think about the universe of factors we have discussed, can you identify which, if any, of the following factors contributed to the poor outcomes in this case, and why? Recall that the factors we have discussed are: (1) Country-specific enabling environment, (2) Democratic/political, (3) Foreign policy/other donors, (4) International, (5) Program-specific, (6) USAID mission.

Factor 1 & why important?
Factor 2 & why important?
Factor 3 & why important?
Factor 4 & why important?
Factor 5 & why important?
We are nearly done here! Thinking about program success and failure, everything else being equal, are there any types of programs or DG activities (rule of law, civil society, elections, parties and legislatures, anticorruption, decentralization, etc.) that you think are more likely or less likely to succeed than others? If so, which ones and why?
Sector 1 & why more or less likely successful?
Sector 2 & why more or less likely successful?
We would like to ask you to give us a few general observations and recommendations on the basis of your overall experience. First, what guidance would you give to a DG officer thinking about issues related to program sequencing and an appropriate or rational mix of programs in a DG portfolio?
1. Do you have any observations about the general characteristics of successful DG programs?
2. Do you have any additional comments or final observations?