
To undertake impact evaluations, RFPs and RFAs1 would need to contain explicit language indicating that on this occasion such an evaluation is expected. The solicitation would not need to specify the evaluation design in detail; the committee and the field teams were told that implementers would readily understand the implications of language that called for sound impact evaluation as requiring the collection of baseline data, treatment and control groups where possible, and alternatives when the project involved an “N = 1” intervention. But the process would need to begin at this stage.

1 A key distinction among the types of agreements is the amount of control that USAID has over how the award is implemented. USAID has the most control over contracts, less with cooperative agreements, and the least with grants, which give implementers wide discretion over how to carry out projects, including M&E.

If a more detailed statement is considered preferable, a recent RFP in one of the missions that the field teams visited provides an example.2 As part of the performance monitoring plan called for in the RFP for the Democratic Linkages project in Uganda, bidders were told they should have “a clearly developed strategy for assessing the impact of the program at all three levels [national, district, and subcounty] by evaluating outcomes over time (comparing pre-intervention and post-intervention values on impact variables) or by comparing outcomes in districts selected to receive the program and those that do not (matched to ensure their comparability)” (USAID/Uganda 2007:27).
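The two strategies in that RFP language, comparing pre- and post-intervention values on an impact variable and comparing assisted districts with matched comparison districts, can be illustrated with a brief sketch. The example below is a minimal, hypothetical illustration: the district names and indicator scores are invented and are not drawn from the Uganda program or from the report’s data.

```python
# Minimal, hypothetical sketch of the two impact-estimation strategies the RFP
# describes: (1) a pre/post comparison on an impact variable, and (2) a comparison
# of assisted districts with matched comparison districts (a simple
# difference-in-differences). All names and scores below are invented.

from statistics import mean

# (baseline, endline) values of a hypothetical 0-100 impact indicator
assisted = {"District A": (42, 58), "District B": (37, 50)}    # receive the program
comparison = {"District C": (40, 46), "District D": (38, 41)}  # matched, no program

def average_change(districts):
    """Average change from baseline to endline across a group of districts."""
    return mean(post - pre for pre, post in districts.values())

# Strategy 1: pre- vs. post-intervention change in the assisted districts only.
pre_post = average_change(assisted)

# Strategy 2: difference-in-differences -- the assisted districts' change minus the
# matched comparison districts' change, netting out trends unrelated to the program.
did = pre_post - average_change(comparison)

print(f"Pre/post change (assisted districts): {pre_post:.1f} points")
print(f"Difference-in-differences estimate:   {did:.1f} points")
```

The baseline values in this sketch are exactly what the rest of the chapter argues must be collected before, or very soon after, project activities begin; without them, neither calculation is possible.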

Points for Impact Evaluations

Once USAID receives proposals, the bids must be evaluated. Another part of the competitive process is awarding points, which are specified in the RFP or RFA, to various parts of a proposal. One of the impediments to encouraging investment in evaluations is that relatively few points are assigned to the M&E plan, and often the M&E plan is included as a subset of some other category rather than being graded on its own. The committee did not undertake an extensive examination of this issue, but meetings with DG officers and implementers and the field visits suggest that it would be rare for an M&E plan to count for much more than 10 out of 100 possible points for the overall proposal. By contrast, the experience and quality of the implementer’s chief of party might earn 30 to 40 points because management ability is considered so critical to project success.

The committee is not recommending a specific number of points for evaluation, but it does seem likely that some change would be needed to give a more rigorous evaluation plan a competitive advantage. Instead of changing the number of points, another approach would be to treat the M&E plan as a separate category, so that a high score might be a tipping point or a genuine competitive advantage. The DG office could consult with other areas in USAID, such as health or agriculture, where impact evaluations may be more common practice, for guidance on how to structure the points or process used in evaluating proposals.
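A purely hypothetical illustration of why the weighting matters: the sketch below compares two invented bids under a scheme in which the M&E plan is worth roughly 10 of 100 points and under one that gives it a heavier, separately scored weight. The categories, weights, and scores are assumptions made for illustration only, not USAID’s actual scoring rules.

```python
# Hypothetical illustration of how weighting the M&E plan more heavily (or scoring it
# as its own category) can become a tipping point between two otherwise close bids.
# All categories, weights, and scores are invented for illustration.

def total_score(scores, weights):
    """Weighted total for a proposal; scores and weights share the same keys."""
    return sum(weights[k] * scores[k] for k in weights)

# Bid X: strong chief of party, weak M&E plan. Bid Y: weaker management, rigorous M&E plan.
bids = {
    "Bid X": {"technical": 85, "management": 95, "me_plan": 40},
    "Bid Y": {"technical": 85, "management": 78, "me_plan": 90},
}

schemes = {
    "M&E ~10 of 100 points": {"technical": 0.55, "management": 0.35, "me_plan": 0.10},
    "M&E as weightier category": {"technical": 0.50, "management": 0.30, "me_plan": 0.20},
}

for label, weights in schemes.items():
    totals = {bid: round(total_score(scores, weights), 1) for bid, scores in bids.items()}
    winner = max(totals, key=totals.get)
    print(f"{label}: {totals} -> {winner}")
```

Under the first, invented weighting the bid with the stronger chief of party wins despite a weak evaluation plan; under the second, the more rigorous M&E plan tips the award, which is the kind of competitive advantage described above.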

2 Again, as far as the committee was able to determine, these requirements were exceptions.


Time Pressures

One of the most precious commodities once an award is made is time. As noted above, once an award is made, there is often great pressure to “move the money” as soon as it becomes available, to “hit the ground running,” and to “show early success.” In principle, implementers generally have 30 to 60 days after an award is signed to develop an M&E plan for approval by the mission, which usually includes collection of some kind of baseline information or data prior to, or very soon after, the start of the project (assistance) activities. Yet in practice, two things often happen: (1) time pressures mean that project activities actually begin before all the work to set up and implement the monitoring plan and baseline measurements can be accomplished, or (2) the process of approving the monitoring plan drags on, sometimes for months, so that projects fall behind schedule and plans to collect baseline measures are delayed or dropped. The effect is the same in both cases: crucial baseline data are not collected and may not be able to be reconstructed later in the project. The opportunity for a rigorous assessment of project impact may be effectively lost.

For those select projects for which DG officers want sound impact evaluations, contracting schedules for implementers need to allow for the implementation of an appropriate evaluation design, including establishing a suitable control or comparison group and setting up and completing baseline measurements on both the assistance and the control groups.3 Policymakers may need to be reminded that rushing to roll out projects without allowing for careful examination of initial conditions and the creation of comparison groups undermines the only way to accumulate knowledge on whether DG projects are working as intended and whether those expenditures are worthwhile.

Keeping Project Evaluation Independent

Ideally, the individuals or contractors who implement a project should not be the only ones involved in evaluating its outcomes. After all, they have every incentive to show success. Independent evaluations by a separate contractor that show project success are therefore much more convincing.





3 Where the comparison group is part of a population already being surveyed and the baseline data can be obtained from the survey, the need to establish relationships with the comparison group is obviated. But for activities involving smaller and identifiable control groups—such as sets of legislators or judges or NGOs or specific villages that will not receive assistance in the initial phase of the program—time to establish such relationships to allow proper data collection is essential to any sound impact evaluation.

USAID has already recognized this principle in its practices for process evaluations by requiring that they be carried out by agents other than the program implementers. Yet this is easier for process evaluations, which can be undertaken after a project has begun or been completed, than for impact evaluations, which generally require that plans for data gathering and analysis be “built in” to the project in the design stage.

Once an award is given, USAID could then give separate contracts, or independent tasks within the same contract, to implementers A and B, the former to carry out the program and the latter to carry out the evaluation portion. This would leave the evaluation partner, who is receiving separate payment and rating from USAID on the quality of its evaluation, with incentives to provide the highest-quality evaluations for USAID. To minimize the risk of collusion, USAID may have to require contractors who implement a large number of projects for USAID DG offices to work with several different evaluation partners; similarly, evaluation contractors should be required to partner with several different implementers over time in order to ensure continued independence of project and evaluation agents.

Resource Issues

One of the major objections to impact evaluations that the committee and its field teams encountered is that they “cost too much.” The collection of high-quality baseline data and indicators, especially since it must be done both for those who receive the DG support and for a control group that does not, can be costly, although Chapters 6 and 7 discuss ways in which at least some of those costs could be reduced. But unfortunately there is no way to analyze that objection relative to current M&E spending, because USAID is not able to provide reliable estimates of those costs. This is true both for USAID/Washington and for the three missions visited by the committee’s field teams.

There are several reasons that USAID cannot provide an estimate of its M&E expenditures. One reason is that there is no consistent methodology for budgeting project evaluations, so that both missions and implementers may count the same things in different ways. Perhaps more important, as already discussed, there are many kinds of M&E, and the costs of some are much easier to estimate than others. The list below was developed with the assistance of USAID/Washington staff and the work of the three field teams.

• M&E plans in implementer proposals. Proposals are expected to include M&E plans, but these differ in the level of detail, and the cost of preparing them would be difficult to measure. Sometimes a proposal includes an estimate of costs directly related to M&E (e.g., if the implementer anticipates doing an opinion survey), but this does not always happen and is not a requirement. It is uncertain whether a project’s M&E budget would include the time that staff members spend collecting data on indicators and preparing required reports. In some cases, local staff will collect the information, which is then sent to the implementer’s headquarters for analysis and preparation of the required reports for USAID. In this case the costs would more likely be considered part of the project’s overhead than part of the M&E costs. So project budgets might show a zero (even with a good M&E plan) or might show tens of thousands of dollars if, for example, annual opinion surveys are planned.

• Mission Performance Management Plans (PMPs). Required of each mission as part of meeting Government Performance and Results Act (GPRA) requirements, these set out “strategic objectives” and “intermediate results” with corresponding results indicators. Many missions will spend money to have consultants train mission staff in developing PMPs and/or help develop them. Missions might also spend money to collect some data for them. But in many cases missions rely on data collected by partners or from third-party sources (e.g., the host government, local NGOs) and on mission staff to develop the plans and compile the data, and thus would not have a budget line item dedicated to PMPs.

• USAID annual report and common indicators. Missions were required to answer certain common questions each year for the annual report (in addition to the PMPs). Starting in FY2007, this was replaced by the common indicators for USAID and the State Department developed as part of the foreign assistance “F Process” reforms. These costs are unlikely to be included in mission budgets.

• Self-evaluations by implementers. Some grants and contracts include plans for the implementer to conduct its own evaluation, at the midway point and/or the end of the project. Typically these will include budgets of $10,000 to $20,000 to bring in people (e.g., from the home office) to do the evaluation. These may or may not include a budget to collect baseline and subsequent data.

• Outside evaluations of grants/contracts. These are typically requested and paid for by a mission, often when it is thought a project is not performing well or when a major project is close to completion and an evaluation is part of planning a follow-on project. Again, this type of evaluation almost always consists of a team of two to four consultants who spend two to three weeks in-country and base their findings largely on interviews with a range of people (mission staff, partner staff, direct and indirect beneficiaries, local experts, and so forth). This type of evaluation costs between $40,000 and $100,000, depending on the number of consultants and the amount of time spent in the country. A mission might undertake zero to three evaluations of DG projects per year, depending on a number of factors (e.g., the number of activities in the DG portfolio, whether a new strategy is due, whether a major event occurs in the country, or whether new mission staff arrive).

• Strategic objectives final evaluations. Missions are required to conduct a final evaluation whenever they close out activities in one of their strategic objectives. These are conducted in much the same way as outside evaluations of grants/contracts, but with more emphasis on the overall impact on a sector rather than exclusively on the performance of the implementers. The cost would be about the same as for outside evaluations and would depend on similar factors.


