
The German Agency for Technical Cooperation (Deutsche Gesellschaft für Technische Zusammenarbeit, GTZ) employs a multipronged approach to evaluation. Its current range of instruments combines self-evaluation with independent and external evaluations. Evaluations aim to address questions of relevance, effectiveness, impact, efficiency, and sustainability.3 These questions are addressed throughout a project's life span as a means of better understanding the links between inputs and outcomes. GTZ's commitment to evaluation is demonstrated by its increased spending on these activities: the agency spends "roughly 1.2 percent of its public benefit turnover on worldwide evaluations—some EUR 9 million a year" (Schmid 2007).
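As a rough check on scale, if EUR 9 million is about 1.2 percent of GTZ's public benefit turnover, the turnover itself is on the order of EUR 750 million a year (9 million / 0.012 = 750 million), assuming both figures refer to the same annual base.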

The Swedish International Development Cooperation Agency (SIDA) is also actively considering ways to improve its evaluation tools.

Since 2005, SIDA has shifted from post-hoc project evaluations to a focus on underlying assumptions and theories; specifically, SIDA is currently conducting a project that "looks at the program theory of a number of different projects in the area. This evaluation focuses on the theoretical constructs that underpin these projects and tries to discern patterns of ideas and assumptions that recur across projects and contexts."4 Building on these initial efforts, SIDA hopes to combine the results of this study with others to "make an overall assessment of the field."

The Norwegian Agency for Development Cooperation (NORAD) has also initiated a new strategy for evaluating the effectiveness of its programs in the area of development assistance. The intent of this new strategy, undertaken in 2006, is to "help Norwegian aid administrators learn from experience by systematizing knowledge, whether it is developed by (themselves), in conjunction with others, or entirely by others. Additionally, the evaluation work has a control function to assess the quality of the development cooperation and determine whether resources applied are commensurate with results achieved."5 Additional attention is being paid to communicating the results of such evaluations to other agencies and stakeholders; this emphasis on communicating results is widely shared in the donor community.

3 For further information, see "Working on Sustainable Results: Evaluation at GTZ." Avail

4 For more information on this project, see SIDA, "Sida's Work with Democracy and Human Rights." Available at: http://www.sida.se/sida/jsp/sida.jsp?d=0&a=0&language=en_US. Accessed on September 12, 2007.

5 For more information, see NORAD's Web site: http://www.norad.no/default.asp?V_ITEM_

The Danish Ministry of Foreign Affairs has embarked on an extensive study of both its own and multilateral agencies’ evaluations of development and democracy assistance (Danish Ministry of Foreign Affairs 2005).

It has found that evaluations vary greatly in method and value, with many evaluations failing to provide unambiguous determinations of program results. In regard to the United Nations Development Program’s central evaluation office, “its potential for helping strengthen accountability and performance assessment is being underexploited, both for the purpose of accountability and as an essential basis for learning” (Danish Ministry of Foreign Affairs 2005:4).

Finally, the Canadian International Development Agency (CIDA) has been involved in recent efforts to improve evaluation and to learn from collective experience with international assistance in the area of democracy and governance.

In April 1996, as part of its commitment to becoming more results-oriented, CIDA's President issued the "Results-Based Management in CIDA—Policy Statement." This statement consolidated the agency's experience in implementing Results-Based Management (RBM) and established some of the key terms, basic concepts, and implementation principles. It has since served as the basis for the development of a variety of management tools, frameworks, and training programs. The Agency Accountability Framework, approved in July 1998, is another key component of the results-based management approach practiced in CIDA. (CIDA 2007)

The CIDA report makes an important distinction, however: "The framework articulates CIDA's accountabilities in terms of developmental results and operational results at the overall agency level, as well as for its various development initiatives. This distinction is crucial... since the former is defined in terms of actual changes achieved in human development through CIDA's development initiatives, while the latter represents the administration and management of allocated resources (organisational, human, intellectual, physical/material, etc.) aimed at achieving development results."

In short, there is growing agreement—across think tanks, blue-ribbon panels, donor agencies, and foreign ministries—that current evaluation practices in the area of foreign assistance in general, and of democracy assistance in particular, are inadequate to guide policy and that substantial efforts are needed to improve the knowledge base for policy planning.

Thus, USAID is not alone in struggling with these issues.

CURRENT POLICY AND LEGAL FRAMEWORK FOR USAID DG ASSESSMENTS AND EVALUATIONS

Current DG policies regarding project assessment and evaluation are shaped in large part by broader USAID and U.S. government policies and regulations. Official USAID policies and procedures are set forth in the Automated Directives System (ADS) on its Web site; Series 200 on "Programming Policy" covers monitoring and evaluation in Section 203 on "Assessing and Learning" (USAID ADS 2007). Of particular importance for this report, in 1995 the USAID leadership decided to eliminate the requirement of a formal evaluation for every major project; instead, evaluations would be "driven primarily by management need" (Clapp-Wincek and Blue 2001:1). The prior practice of conducting mainly post-hoc evaluations (which were almost entirely process evaluations), often done by teams of consultants brought in specifically for the task, was seen as too expensive and time-consuming to be applied to every project.

As a result of the change, the number of evaluations for all types of USAID assistance, not just DG, has declined, and the approach to evaluation has evolved over time (Clapp-Wincek and Blue 2001). ADS 203.3.6.1 ("When Is an Evaluation Appropriate?") lists a number of situations that should require an evaluation (a schematic sketch of these triggers follows the list):

• [...] result (positive or negative) that should be explained (such as gender differential results);

• Customer, partner, or other informed feedback suggests that there are implementation problems, unmet needs, or unintended consequences or impacts;

• Issues of sustainability, cost effectiveness, or relevance arise;

• The validity of Results Framework hypotheses or critical assumptions is questioned (e.g., due to unanticipated changes in the host country environment);

• Periodic Portfolio Reviews have identified key questions that need to be answered or that need consensus; or

• Extracting lessons is important for the benefit of other Operating Units or future programming (USAID ADS 2007:24).
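Read as logic, this list is disjunctive: any single situation makes an evaluation appropriate. A minimal sketch in Python makes that concrete; the data structure and its field names are assumptions invented for illustration, since ADS 203.3.6.1 prescribes policy triggers, not a data format.

# Illustrative only: ADS 203.3.6.1 defines policy triggers, not a data format;
# this structure and its field names are assumptions made for the example.
from dataclasses import dataclass, fields

@dataclass
class ProjectSignals:
    unexpected_result: bool = False         # a result (positive or negative) needs explaining
    feedback_flags_problems: bool = False   # customer/partner feedback suggests problems or unmet needs
    sustainability_in_question: bool = False  # sustainability, cost effectiveness, or relevance at issue
    assumptions_in_question: bool = False   # Results Framework hypotheses or critical assumptions doubted
    portfolio_review_questions: bool = False  # Portfolio Review raised key unanswered questions
    lessons_needed_elsewhere: bool = False  # lessons would benefit other Operating Units or future programs

def evaluation_appropriate(signals: ProjectSignals) -> bool:
    # The ADS list is an "or": a single trigger suffices.
    return any(getattr(signals, f.name) for f in fields(signals))

# Example: unexpected gender-differential results alone warrant an evaluation.
print(evaluation_appropriate(ProjectSignals(unexpected_result=True)))  # True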

These evaluations generally remain the traditional process evaluations, using teams of outside experts, undertaken while a project is under way or after it has been completed.6

The second significant policy shaping USAID evaluation practices is the Government Performance and Results Act (GPRA) of 1993. GPRA "establishes three types of ongoing planning, evaluation, and reporting requirements for executive branch agencies: strategic plans..., annual performance plans, and annual reports on program performance. In complying with GPRA, agencies must set goals, devise performance measures, and then assess results achieved" (McMurtry 2005:1). GPRA has led to the development of an elaborate performance monitoring system across the federal government. Performance monitoring is different from evaluation; as defined by USAID, for example:

Performance monitoring systems track and alert management as to whether actual results are being achieved as planned. They are built around a hierarchy of objectives logically linking USAID activities and resources to intermediate results and strategic objectives through cause-and-effect relationships. For each objective, one or more indicators are selected to measure performance against explicit targets (planned results to be achieved by specific dates). Performance monitoring is an ongoing, routine effort requiring data gathering, analysis, and reporting on results at periodic intervals.

Evaluations are systematic analytical efforts that are planned and conducted in response to specific management questions about performance of USAID-funded development assistance programs or activities. Unlike performance monitoring, which is ongoing, evaluations are occasional—conducted when needed. Evaluations often focus on why results are or are not being achieved. Or they may address issues such as relevance, effectiveness, efficiency, impact, or sustainability. Often, evaluations provide management with lessons and recommendations for adjustments in program strategies or activities. (USAID 1997:1)

To implement the system required by GPRA, every USAID operating unit (missions overseas, bureaus or offices in Washington) must develop strategic objectives (SOs). The DG office created a process for strategic assessments that is often used to inform the development of mission strategies (USAID 2000). Typically, a team of experts, which may include a mix of contractors and USAID personnel, spends several weeks evaluating current conditions in a country with respect to key aspects of democracy and governance and analyzing the opportunities for intervention and impact. This assessment is not quite keyed to the four elements in USAID's DG goals described in Chapter 1, however. Rather, strategic assessments deal with five areas: consensus, rule of law, competition, inclusion, and good governance. After surveying the degree to which the country has these elements, the assessment considers the key actors and institutions whose behavior or condition needs to change to improve democratic development and then suggests policies—with explicit attention to feasibility given the resources of USAID in that country and country conditions—to promote advances in some or all of these areas. Not every country is assessed, and some country assessments may be updated if conditions change enough to warrant a reexamination. Since the formal assessment tool was adopted in late 2000, more than 70 assessments have been conducted in 59 countries.7

To achieve their strategic objectives, all USAID operating units develop a Results Framework and a Program Monitoring Plan that include subobjectives that tie more closely to specific projects (see Figure 2-1 for an illustrative results framework). Depending on the size of its budget and other factors, a mission might have anywhere between one and a dozen SOs, of which one or perhaps two will relate to democracy and governance.

Indicators are used to track progress from the project level through intermediate objectives up to the SO. Missions are required to report their performance against these indicators annually, but below the SO level they can choose which indicators to report and can change the indicators they report each year. Generally, each contract or grant must have an approved performance monitoring plan, which includes both targets and the indicators that will be used to determine whether the project meets its objectives (USAID ADS 2007). Some implementers also develop and track additional indicators, usually to provide further evidence of achieving project goals.
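A short sketch can summarize the hierarchy just described: strategic objectives at the top, intermediate results beneath them, and indicators with explicit targets at each level, checked routinely as performance monitoring. This is an illustration under assumptions, not USAID's actual system; the class names, fields, and the simple meets-the-target rule are invented for the example.

# Illustrative sketch of a results hierarchy; names and the target rule are
# assumptions for the example, not USAID's actual data model.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Indicator:
    name: str
    target: float                   # planned result to be achieved by a specific date
    actual: Optional[float] = None  # latest reported value, if any

    def on_track(self) -> Optional[bool]:
        # None until data are reported; otherwise whether the target is met.
        return None if self.actual is None else self.actual >= self.target

@dataclass
class Objective:
    name: str
    indicators: list = field(default_factory=list)
    sub_objectives: list = field(default_factory=list)

    def report(self, indent: str = "") -> None:
        # Performance monitoring: routine roll-up of indicator status through the hierarchy.
        for ind in self.indicators:
            print(f"{indent}{self.name} | {ind.name}: on_track={ind.on_track()}")
        for sub in self.sub_objectives:
            sub.report(indent + "  ")

# Example: one SO with an intermediate result, each tracked by one indicator.
so = Objective("SO: more accountable local governance",
               [Indicator("citizen satisfaction index", target=60.0, actual=55.0)])
so.sub_objectives.append(
    Objective("IR: strengthened municipal councils",
              [Indicator("councils publishing budgets", target=20.0, actual=24.0)]))
so.report()

Evaluation, by contrast, would ask why the lagging indicator is lagging, a question that no roll-up of indicator status can answer by itself.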


