Similar Literature
19 similar documents found.
1.
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted programs. Causal loop diagrams based on a systems thinking approach can better capture a multidimensional, layered program model while providing a more complete understanding of the relationships between program elements, which enables evaluators to examine influences and dependencies between and within program components. Few studies describe how to conceptualize and apply systems models for educational program evaluation. The goal of this paper is to use our NSF-funded Interdisciplinary GK-12 project, Bringing Authentic Problem Solving in STEM to Rural Middle Schools, to illustrate a systems thinking approach to modeling a complex educational program to aid in evaluation. GK-12 pairs eight teachers with eight STEM doctoral fellows per program year to implement curricula in middle schools. We demonstrate how systems thinking adds value by modeling the participant groups, instruments, outcomes, and other factors in ways that enhance the interpretation of quantitative and qualitative data. The model's chief limitation is its added complexity; its benefits include a better understanding of interactions and outcomes, and analyses that reflect interacting or conflicting variables.
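To make the contrast with a linear logic model concrete, the sketch below encodes a causal loop diagram as a signed directed graph and searches it for feedback loops, classifying each as reinforcing or balancing. It is a minimal Python illustration: the element names are hypothetical stand-ins, not the GK-12 program's actual model.

```python
# A causal loop diagram as a signed directed graph. Element names are
# hypothetical stand-ins, not the GK-12 program's actual model.

edges = [
    ("fellow_training",    "curriculum_quality", "+"),
    ("curriculum_quality", "student_engagement", "+"),
    ("student_engagement", "teacher_buy_in",     "+"),
    ("teacher_buy_in",     "fellow_training",    "+"),  # closes a feedback loop
    ("teacher_workload",   "teacher_buy_in",     "-"),
]

graph = {}
for src, dst, _ in edges:
    graph.setdefault(src, []).append(dst)
polarity = {(src, dst): sign for src, dst, sign in edges}

def find_loops(graph):
    """Return each simple cycle once, as a list of nodes ending at its start."""
    loops, seen = [], set()
    def dfs(node, path):
        for nxt in graph.get(node, ()):
            if nxt == path[0]:
                key = frozenset(path)
                if key not in seen:  # report each cycle only once
                    seen.add(key)
                    loops.append(path + [nxt])
            elif nxt not in path:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    return loops

for loop in find_loops(graph):
    signs = [polarity[(a, b)] for a, b in zip(loop, loop[1:])]
    # A loop with an even number of '-' links reinforces itself; odd dampens.
    kind = "reinforcing" if signs.count("-") % 2 == 0 else "balancing"
    print(f"{kind}: {' -> '.join(loop)}")
```

A linear logic model is the special case in which this graph is acyclic; any loop the search reports is exactly the structure a logic model cannot represent.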

2.
Historically, there has been considerable variability in how formative evaluation has been conceptualized and practiced. FORmative Evaluation Consultation And Systems Technique (FORECAST) is a formative evaluation approach that develops a set of models and processes that can be used across settings and times, while allowing for local adaptations and innovations. FORECAST integrates specific models and tools to address limitations in program theory, implementation, and evaluation. In the period since its initial use in a federally funded community prevention project in the early 1990s, evaluators have incorporated important formative evaluation innovations into FORECAST, including the integration of feedback loops and proximal outcome evaluation. In addition, FORECAST has been applied in a randomized community research trial. In this article, we describe updates to FORECAST and its implications for ameliorating failures in program theory, implementation, and evaluation.

3.
This paper examines the application of Complexity Theory constructs to a research-for-development program evaluation and presents an overview of the implications and promising approaches for evaluating complex programs. We discuss lessons learned from an evaluation completed for the International Development Research Centre's Food, Environment and Health (FEH) program, which investigated the integration and outcomes of five strategic program priorities: partnerships, southern leadership, gender and equity, scale, and environmental sustainability. We present interpretations from a secondary, thematic content analysis that categorized evaluation findings across four complexity constructs: emergence, unpredictability, contradiction, and self-organization. Viewing the evaluation through these constructs surfaced important features of the FEH program to date, specifically its evolving approach, adaptiveness to emergent issues, non-linear outcomes, and self-organizing agents, each with implications for the evaluative process. We conclude that the most appropriate evaluation designs for complex funding programs are participatory (to explore all stakeholders' influence), adaptive (to capture the unexpected), and attentive to external contexts. Applying complexity constructs may help evaluators gain a deeper understanding of how program contexts change in the face of complexity and why some evaluation methods work more effectively than others.

4.
The present study provides an implementation, process, and immediate-outcomes evaluation of the classroom component of Project Towards No Drug Abuse (TND). This project involves the development and evaluation of a school-based drug abuse prevention curriculum for continuation high school youth, who are at relatively high risk for drug abuse. Three randomized conditions were evaluated: standard care, classroom only, and classroom plus school-as-community; the last was an enhanced school-based condition involving meetings and activities outside the classroom. Implementation was high in both program conditions even though this was a higher-risk context. Process evaluation data were favorable and did not vary between the two program conditions. Immediate-outcome data (knowledge) were higher in the two program conditions than in the standard care condition. Regarding the classroom program, the addition of extra-classroom activities did not appear to alter the quality of program delivery.

5.
Like artisans in a professional guild, we evaluators create tools to suit our ever-evolving practice. The tools we use as evaluators are the primary artifacts of our profession; they reflect our practice and embody an amalgamation of paradigms and assumptions. With evaluation purposes increasingly shifting from judging program worth to understanding how programs work, the evaluator's role is changing to that of facilitating stakeholders in a learning process. This involves clarifying purposes and choices, as well as unearthing critical assumptions. In such a role, evaluators become major tool-users and begin to innovate, making small refinements or producing completely new tools to fit a specific challenge or context. We interrogate the form and function of 12 tools used by evaluators when working with complex evaluands and complex contexts. The form is described in terms of traditional qualitative techniques and particular characteristics of each tool's elements, use, and presentation. The function of each tool is then analyzed with respect to articulating assumptions and affecting the agency of evaluators and stakeholders in complex contexts.

6.
Principles-focused evaluations allow evaluators to appraise each principle that guides an organization or program. This study completed a principles-focused evaluation of a new community mental health intervention called Short Term Case Management (STCM) in Toronto, Canada. STCM is a time-limited intervention in which clients address unmet needs and personalized goals over 3 months. Findings show that a principles-focused evaluation, assessing whether program principles are guiding, useful, inspiring, developmental, and/or evaluable (GUIDE), is a practical formative evaluation approach, offering an understanding of a novel intervention and its key components: assessment and planning, support plan implementation and evaluation, and care transitions. Findings also highlight that STCM may work best for clients ready to participate in achieving their own goals. Future research should explore how best to apply the GUIDE framework to complex interventions with multiple principles, to increase evaluation feasibility and focus.
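The GUIDE criteria named in the abstract lend themselves to a simple rating rubric. The Python sketch below is one hypothetical way to tabulate reviewer scores per principle and flag weak criteria; the principle and scores shown are invented, and this is an illustration of the idea rather than the study's actual instrument.

```python
# Hypothetical tabulation of GUIDE ratings for a program's principles.
# The criteria come from the abstract (guiding, useful, inspiring,
# developmental, evaluable); the principle and scores are invented.

from statistics import mean

CRITERIA = ("guiding", "useful", "inspiring", "developmental", "evaluable")

# ratings[principle][criterion] -> reviewer scores on a 1-5 scale
ratings = {
    "Client-driven goal setting": {
        "guiding": [4, 5], "useful": [5, 4], "inspiring": [3, 4],
        "developmental": [4, 4], "evaluable": [2, 3],
    },
}

for principle, scores in ratings.items():
    print(principle)
    for criterion in CRITERIA:
        avg = mean(scores[criterion])
        flag = "" if avg >= 3 else "  <- needs revision"
        print(f"  {criterion:>13}: {avg:.1f}{flag}")
```

A tabulation like this makes it easy to see at a glance which principles fall short on a given GUIDE criterion (here, evaluability) and therefore where formative attention should go.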

7.
This article contributes to the growing literature on evaluation and implementation science by examining the interaction between staff perceptions of organizational strength and perceptions and indicators of program fidelity. As part of a pilot project related to the evaluation of the Family to Family initiative, a survey was distributed to employees within two urban child welfare agencies, with a total of 410 respondents across both sites for a combined response rate of 72.2%. Survey results were analyzed both in terms of respondents' perceptions of their agency and in relation to measures of program performance and workload. Multivariate models show that organizational indicators are the most significant and positive predictors of perceived program implementation. Specifically, staff who perceived information to be readily available within their agency also believed that the programs were well implemented there. These findings suggest that as the value of program changes is articulated within an organization, the implementation of the initiative is perceived to improve.
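The abstract does not report its model specification, but a multivariate model of the kind it describes is straightforward to sketch. The Python example below regresses a perceived-implementation score on two organizational predictors using ordinary least squares; the data are synthetic and the variable names (info_availability, workload) are hypothetical stand-ins for the survey's measures.

```python
# A sketch of the kind of multivariate model the study describes:
# regressing perceived implementation on organizational indicators.
# All data below are synthetic; variable names are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 410  # matches the survey's respondent count

info_availability = rng.normal(0.0, 1.0, n)  # organizational indicator
workload = rng.normal(0.0, 1.0, n)           # control variable
perceived_impl = (0.6 * info_availability    # invented effect sizes
                  - 0.1 * workload
                  + rng.normal(0.0, 1.0, n))

X = sm.add_constant(np.column_stack([info_availability, workload]))
model = sm.OLS(perceived_impl, X).fit()
print(model.summary(xname=["const", "info_availability", "workload"]))
```

The summary's coefficient table is where a claim like "organizational indicators are the most significant and positive predictors" would be read off: a positive coefficient on the organizational indicator with a small p-value, net of the controls.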

8.
This paper describes how client exit interviews can assist human service administrators and workers to better understand the outcomes their programs are designed to accomplish. Specifically, the qualitative component of a demonstration family literacy program evaluation is used to illustrate how client input can be used to fine-tune the outcomes component of a program's logic model. An analysis of semi-structured exit interviews with 35 clients, randomly selected from all 89 served in the first year of the program, led to a revision of the program's original logic model, creating explicit ‘testable’ pathways to achieving intended outcomes.

9.
This article constitutes a case study of the development and implementation of the "results framework," an innovative planning and evaluation tool that is rapidly becoming a standard requirement for United States Agency for International Development (USAID) projects. The framework is used in a USAID-funded regional initiative for HIV/AIDS prevention in Central America. This new program evaluation and monitoring tool provides many advantages over traditional evaluation approaches that use outside consultants to provide midterm and end-of-project evaluations. The results-framework process, which spans the life of the project, provides an opportunity for program staff, donors, partners, and evaluators to work as a team to collect and use rich, longitudinal data for project planning, implementation, and evaluation purposes.

10.
Increased attention has been placed on evaluating the extent to which clinical programs that support the behavioral health needs of youth have effective processes and result in improved patient outcomes. Several theoretical frameworks from dissemination and implementation (D&I) science have been put forth to guide the evaluation of behavioral health programs implemented in real-world settings. Although a strong rationale exists for integrating D&I science into program evaluation, few examples are available to guide the evaluator in integrating D&I science into the planning and execution of evaluation activities. This paper seeks to inform program evaluation efforts by outlining two D&I frameworks and describing their integration in program evaluation design. Specifically, it supports evaluation efforts by illustrating the use of these frameworks via a case example of a telemental health consultation program in pediatric primary care, designed to improve access to behavioral health care for children and adolescents in rural settings. Lessons learned from this effort, as well as recommendations for the future evaluation of programs using D&I science to support behavioral health care in community-based settings, are discussed.

11.
The continued devolution of social welfare systems and services in the U.S. results in high-stakes program evaluations in the field of family support and early intervention. Programs are expected to utilize evidence-based interventions and to demonstrate effectiveness. A look at implementation helps to differentiate between theories that do not work and programs that are not effective. Methods for identifying program implementation are needed. In a 17-site program evaluation, the author and her colleagues developed a methodology for measuring implementation and demonstrated the effects of differential implementation in understanding program outcomes.

12.
Evaluation reports increasingly document the degree of program implementation, particularly the extent to which programs adhere to prescribed steps and procedures. Many reports are cursory, however, and few, if any, fully portray the long and winding path taken when developing evaluation instruments, particularly observation instruments. In this article, we describe the development of an observational method for evaluating the degree to which K-12 inquiry science programs are implemented, including the many steps and decisions that occurred along the way, and present evidence for the reliability and validity of the data we collected with the instrument. The article introduces a method for measuring adherence in inquiry science implementation and gives evaluators a full picture of what they might expect when developing observation instruments for assessing the degree of program implementation.
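The abstract does not name the reliability statistics used, but for observation instruments a standard check is agreement between two raters coding the same sessions. The Python sketch below computes Cohen's kappa, one common choice, over invented categorical codes; treat it as an illustration of the kind of evidence such a study might report, not the paper's actual analysis.

```python
# Cohen's kappa: chance-corrected agreement between two raters'
# categorical codes. The codes below are invented for illustration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of segments where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes per observed lesson segment: I = inquiry, D = direct.
a = ["I", "I", "D", "I", "D", "I", "I", "D", "I", "I"]
b = ["I", "D", "D", "I", "D", "I", "I", "I", "I", "I"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.47 on this toy sample
```

Kappa discounts the agreement two raters would reach by guessing from their code frequencies alone, which is why it is preferred over raw percent agreement when reporting observation-instrument reliability.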

13.
This paper describes two case management projects and efforts to evaluate their outcomes. Lessons learned from the first effort suggest that a tool referred to as the logic model, and a particular approach to evaluation called the “open systems” model, might be useful to planners and program evaluators. The logic model provides a means of presenting the conditions a program is intended to address, the activities that constitute the program, the short-term outcomes resulting from program activities, and the long-term impacts. The open systems perspective emphasizes evaluation as a tool for achieving program objectives rather than for establishing cause-and-effect relationships. This paper also describes how the logic model and open systems evaluation were used to facilitate the development of an evaluation plan for a project designed to assist homeless, substance-abusing pregnant women.
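The four logic model components listed above map naturally onto a small data structure. The Python sketch below is a minimal illustration, with hypothetical entries loosely echoing the project described; it is not the paper's actual model.

```python
# A minimal sketch of the logic model components named in the abstract:
# conditions -> activities -> short-term outcomes -> long-term impacts.
# The example entries are hypothetical, loosely echoing the case described.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    conditions: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    short_term_outcomes: list[str] = field(default_factory=list)
    long_term_impacts: list[str] = field(default_factory=list)

    def render(self) -> str:
        rows = [("Conditions", self.conditions),
                ("Activities", self.activities),
                ("Short-term outcomes", self.short_term_outcomes),
                ("Long-term impacts", self.long_term_impacts)]
        return "\n".join(f"{label}: {'; '.join(items)}" for label, items in rows)

model = LogicModel(
    conditions=["homelessness", "substance use during pregnancy"],
    activities=["case management", "referral to treatment"],
    short_term_outcomes=["engagement with prenatal care"],
    long_term_impacts=["stable housing", "healthy birth outcomes"],
)
print(model.render())
```

Writing the model down this explicitly is what makes the pathways "testable": each activity should trace forward to at least one short-term outcome, and each impact should trace back to the conditions it is meant to address.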

14.
The Safe Schools/Healthy Students (SS/HS) national evaluation seeks to assess both the implementation process and the results of the SS/HS initiative, exploring factors that have contributed to or detracted from grantee success. Each site is required to forge partnerships with representatives from education, mental health, juvenile justice, and law enforcement, coordinating and integrating their efforts and working together to contribute to comparable outcomes (e.g., reduced violence and alcohol and drug use, improved mental health services). The evaluation uses multiple data collection techniques (archival data, surveys, site visits, interviews, and focus groups) from a variety of sources (project directors, community partners, schools, and students) over several years. Certain characteristics of the SS/HS initiative pose unique challenges for the evaluation, including the absence of common metrics for baseline and outcome data and the lack of a comparison group. A unifying program theory was required to address these challenges and to synthesize the large amounts of qualitative and quantitative information collected. This article stresses the role of program theory in guiding the evaluation.

15.
Multi-site evaluations, particularly of federally funded service programs, pose a special set of challenges for program evaluation. Not only are there contextual differences related to project location; there are often relatively few programmatic requirements, which results in variations in program models, target populations, and services. The Jail Diversion and Trauma Recovery–Priority to Veterans (JDTR) National Cross-Site Evaluation was tasked with conducting a multi-site evaluation of thirteen grantee programs that varied along multiple domains. This article describes the use of a mixed-methods evaluation design to understand the jail diversion programs and client outcomes for veterans with trauma, mental health, and/or substance use problems. We discuss the challenges encountered in evaluating diverse programs and the benefits of the evaluation in the face of these challenges, and offer lessons learned for other evaluators undertaking this type of evaluation.

16.
Schools, districts, and state-level educational organizations are experiencing a great shift in the way they do the business of education. This shift focuses on accountability, specifically the expectation that evaluation-focused efforts will be used effectively to guide and support decisions about educational program implementation. As such, education leaders need specific guidance and training on how to plan, implement, and use evaluation to critically examine district- and school-level initiatives. One effort intended to address this need is the Capacity for Applying Project Evaluation (CAPE) framework. The CAPE framework is composed of three crucial components: a collection of evaluation resources; a professional development model; and a conceptual framework that guides the work to support evaluation planning and implementation in schools and districts. School and district teams serve as active participants in the professional development and ultimately as formative evaluators of their own school- or district-level programs, working collaboratively with evaluation experts. The CAPE framework involves school and district staff in planning and implementing their own evaluation: they decide what evaluation questions to ask, which instruments to use, what data to collect, and how and to whom results should be reported. Initially this work is done through careful scaffolding by evaluation experts, with supports slowly pulled away as the educators gain experience and confidence in their knowledge and skills as evaluators. Because CAPE engages all stakeholders in all stages of the evaluation, the philosophical intent of these capacity-building efforts aligns closely with the collaborative evaluation approach.

17.
The construction industry continues to experience high rates of musculoskeletal injuries despite the widespread promotion of ergonomic solutions. Participatory ergonomics (PE) has been suggested as one approach to engage workers and employers in reducing physical exposures from work tasks, but a systematic review of participatory ergonomics programs showed inconclusive results. A process evaluation is used to monitor and document the implementation of a program and can aid in understanding the relationship between program elements and program outcomes. The purpose of this project is to describe a proposed process evaluation for use in a participatory ergonomic training program for construction workers and to evaluate its utility in a demonstration project among floor layers.

18.
Independent evaluation of refugee-focused programs in developed nations is increasingly a mandatory requirement of funding bodies and government agencies. This paper presents an evaluation of the Integrated Services Centre (ISC) Pilot Project, conducted in Australia in 2007 and early 2008. The purpose of the ISC program was to provide integrated support to humanitarian refugees in settlement, physical health, mental health, and employment. The Pilot Project was based in two primary schools in Perth, Western Australia. The evaluation utilized a flexible, qualitative ‘engaged’ methodology and included interviews, focus groups, and telephone interviews with key stakeholders, project staff, and a small number of refugee families. The strength of the qualitative methodology, including its narrative-rich data, is that it highlights issues as perceived by each stakeholder and provides insights into the daily work of ISC staff, which helped to uncover unintended outcomes. Although the ISC evaluation was intended as a ‘before and after’ design, the researchers acknowledge a weakness common to many evaluations, including this one: when baseline data are required, evaluators are often recruited only after the project has begun. This issue is discussed in the paper. It is critical that independent evaluators are able to begin collecting baseline data as soon as programs are launched, if not before.

19.
In the last 20 years, developmental evaluation has emerged as a promising approach to support organizational learning in emergent social programs. Through a continuous system of inquiry, reflection, and application of knowledge, developmental evaluation serves as a system of tools, methods, and guiding principles intended to support constructive organizational learning. However, missing from the developmental evaluation literature is a nuanced framework to guide evaluators in how to elevate the organizational practices and concepts most relevant for emergent programs. In this article, we describe and reflect on work we did to develop, pilot, and refine an integrated pilot framework. Drawing on established developmental evaluation inquiry frameworks and incorporating lessons learned from applying the pilot framework, we put forward the Evaluation-led Learning framework to help fill that gap and encourage others to implement and refine it. We posit that without explicitly incorporating the assessments at the foundation of the Evaluation-led Learning framework, developmental evaluation's ability to affect organizational learning in productive ways will likely be haphazard and limited.
