Similar Documents
1.
Public works programmes (PWPs) are popular social protection instruments in contexts of chronic poverty, yet very little has been published on the implementation and outcomes of these programmes. This paper presents a formative process and outcome evaluation of the recovery PWP in Blantyre City, Malawi. The evaluation used longitudinal household survey data on PWP beneficiaries, programme records, and interview responses from programme staff and community leaders. The process evaluation findings largely showed agreement between actual and planned activities. The outcome evaluation found indications that the PWP community assets offered some potential benefits to the communities, and that PWP wages allowed beneficiaries to purchase some food. This, however, did not translate into more meals per day, nor did the earnings prevent the decline in household assets, as had been expected. Given a plausible PWP theory and high implementation fidelity, the PWP wage rate or number of work days was either just enough to smooth participant income, or insufficient altogether, to enable achievement of more distal outcomes.

2.
Principles-focused evaluations allow evaluators to appraise each principle that guides an organization or program. This study completed a principles-focused evaluation of a new community mental health intervention, Short Term Case Management (STCM), in Toronto, Canada. STCM is a time-limited intervention in which clients address unmet needs and personalized goals over 3 months. Findings show that a principles-focused evaluation, assessing whether program principles are guiding, useful, inspiring, developmental, and/or evaluable (GUIDE), is a practical formative evaluation approach. Specifically, it offered an understanding of a novel intervention, including its key components of assessment and planning, support plan implementation and evaluation, and care transitions. Findings also highlight that STCM may work best for clients who are ready to participate in achieving their own goals. Future research should explore how best to apply the GUIDE framework to complex interventions with multiple principles, to increase evaluation feasibility and focus.

3.

Community collaboration is an exciting way in which community members, multiple agencies, and professionals are organizing to approach the ever-increasing problem of child sexual abuse. This article discusses a formative evaluation of a Child Sexual Abuse Response Team (CSART), an inter-agency approach to responding to victims of child sexual abuse in Athens-Clarke County, Georgia. The purpose of the formative evaluation was to determine the congruence between the conceptualized collaborative objective of the CSART, as stated in the grant application, and its actual implementation during the first year of program activity. Despite minor incongruence, implementation of the community-based collaborative objective was found to have been achieved in a nondisruptive manner and to have been highly congruent with its conceptualization. Confusion on the part of different agency personnel about roles and responsibilities, particularly during the investigation phase of a report of child sexual abuse, was found to be the major area of incongruence. Additionally, the process of performing a formative evaluation led CSART participants to pay more attention to incongruence between the conceptualization and implementation of the collaborative objective. Greater congruence between conceptualization and implementation thus resulted from CSART members participating in the evaluation process.

4.
Blending high-quality, rigorous research with pure evaluation practice can often be best accomplished through thoughtful collaboration. The evaluation of a high school drug prevention program (All Stars Senior) is an example of how perceived competing purposes and methodologies can coexist to investigate formative and summative outcome variables that can be used for program improvement. Throughout this project there were many examples of the client learning from the evaluator and the evaluator learning from the client. This article presents convincing evidence that collaborative evaluation can improve the design, implementation, and findings of a randomized control trial. Throughout this paper, we discuss many examples of good science, good evaluation, and other practical benefits of practicing collaborative evaluation. Ultimately, the authors created the term pre-formative evaluation to describe the period prior to data collection and before program implementation, when collaborative evaluation can inform program improvement.

5.
This case study attempts to illustrate and address in depth the issues surrounding the collection, analysis, and application of formative research findings to program development and implementation. We provide an in-depth case study of tailoring a program for the residents of Berkshire County, Massachusetts. The formative research process includes collection and analysis of secondary data sources, extensive in-person interviews with community leaders, and in-depth focus groups with members of the population of interest. Findings from the formative research are then applied to tailoring the program materials and presentations and the training of the integrative team of health professionals who offer the program. Distinct components of the program are tailored to the realities of the social, cultural, historical, and health and medical contexts in each community, while other components are tailored to individual participants. Overall, we believe this case study fully illustrates the utility of formative research in tailoring evidence-based programs to increase program relevance and positive outcomes while maintaining fidelity to a program’s learning objectives and evaluation. We hope this in-depth account with specific examples proves useful as a guide to others when designing and conducting formative research to tailor health and medical interventions to the audience.

6.
Theory-based evaluation (TBE) is an evaluation method that shows how a program will work under certain conditions and has been supported as a viable, evidence-based option in cases where randomized trials or high-quality quasi-experiments are not feasible. Despite the model's widely accepted theoretical appeal, there are few examples of its well-implemented use, probably due to the time and money required for planning and confusion over the boundaries between research and evaluation functions and roles. In this paper, we describe the development of a theory-based evaluation design in a Math and Science Partnership (MSP) research project funded by the U.S. National Science Foundation (NSF). Through this work we developed an organizational model distinguishing between and integrating evaluation and research functions, explicating personnel roles and responsibilities, and highlighting connections between research and evaluation work. Although the research and evaluation components operated with independent budgeting, staffing, and implementation activities, we were able to combine datasets across activities to assess the integrity of the program theory, not just the hypothesized connections within it. This model has since been used for proposal development and has been invaluable, as it creates a research and evaluation plan that is seamless from the beginning.

8.
A portion of the graduate program in clinical community psychology at SUNY Buffalo was subjected to a jury trial as a form of program evaluation. The theory of the trial as evaluation, the problems of implementation, and a posttrial evaluation are discussed. The trial, while time-consuming, especially in its pretrial phases, has the potential for presenting a dramatic picture of a program through the medium of human testimony. The jury was able to arrive at clear decisions on questions put to it, with a high degree of confidence. Decision makers accepted some of the jury's conclusions, and subjective evidence suggests that many of the controversial issues which generated the trial were resolved for the group by the procedure. The posttrial evaluation revealed limitations, such as evidence which was not presented at the trial. The experience proved useful for purposes of interdisciplinary education, providing another lens through which the evaluation problem could be viewed.

9.
Community-based participatory research (CBPR) has been posited as a promising methodology to address health concerns at the community level, including cancer disparities. However, the major criticism of this approach is the lack of scientifically grounded evaluation methods to assess the development and implementation of this type of research. This paper describes the process of developing and implementing a participatory evaluation framework within a CBPR program to reduce breast, cervical, and colorectal cancer disparities between African Americans and whites in Alabama and Mississippi, as well as lessons learned. The participatory process involved community partners and academicians in a fluid process to identify common-ground activities and outcomes. The logic model, a lay-friendly approach, was used as the template and clearly outlined the steps to be taken in the evaluation without sacrificing its rigor. We have learned three major lessons in this process: (1) the importance of constant and open dialogue among partners; (2) flexibility to make changes in the evaluation plan and implementation; and (3) the importance of evaluators playing the role of facilitators between the community and academicians. Despite the challenges, we offer a viable approach to the evaluation of CBPR programs focusing on cancer disparities.

10.
Evaluation research is one of the most rapidly evolving fields of applied behavioral science. As demand for program assessment has increased, the number of alternative evaluation approaches has also grown. As a result, everyday practitioners have often lacked sufficient guidelines for the choice of appropriate evaluation strategies. The present paper articulates an underlying epistemological distinction between (a) experimental evaluation models, which simplify program realities in generalizable analyses of discrete causes and effects, and (b) contextual evaluation models, which holistically examine particular program operations. These two evaluation approaches are directed at different purposes and are applicable to different program settings. A typology of program characteristics (breadth of goals, scope of treatment, specificity of results, and clarity of theory) is developed and linked to the appropriateness of experimental and contextual evaluation.

11.
Schools, districts, and state-level educational organizations are experiencing a great shift in the way they do the business of education. This shift focuses on accountability, specifically through the expectation of the effective use of evaluation-focused efforts to guide and support decisions about educational program implementation. Accordingly, education leaders need specific guidance and training on how to plan, implement, and use evaluation to critically examine district- and school-level initiatives. One specific effort intended to address this need is the Capacity for Applying Project Evaluation (CAPE) framework. The CAPE framework is composed of three crucial components: a collection of evaluation resources; a professional development model; and a conceptual framework that guides the work to support evaluation planning and implementation in schools and districts. School and district teams serve as active participants in the professional development and ultimately as formative evaluators of their own school- or district-level programs by working collaboratively with evaluation experts. The CAPE framework involves school and district staff in planning and implementing their evaluation. They are the ones deciding what evaluation questions to ask, which instruments to use, what data to collect, and how and to whom results should be reported. Initially this work is done through careful scaffolding by evaluation experts, where supports are slowly pulled away as the educators gain experience and confidence in their knowledge and skills as evaluators. Since CAPE engages all stakeholders in all stages of the evaluation, the philosophical intention of these capacity-building efforts aligns closely with the collaborative evaluation approach.

13.
The Southeastern Coastal Center for Agricultural Health and Safety (SCCAHS) is one of many newly funded federal research centers, housing five multidisciplinary research projects and seven pilot projects, and serving a multi-state region. In the early stages of such a complex project, with multiple teams separated by geography and disciplines, the evaluation program has been integral in connecting internal and external stakeholders at the center and project levels. We used a developmental evaluation (DE) framework to respond to the complex political environment surrounding agricultural health and safety in the southeast; to engage external stakeholders in guiding the center’s research and outreach trajectories; to support center research teams in a co-creation process to develop logic models and tailored indicators; and to provide timely feedback within the center to address communication gaps identified by the evaluation program. By using DE principles to shape monitoring and evaluation approaches, our evaluation program has adapted to the dynamic circumstances presented as our center’s progress has been translated from a plan in a grant proposal to implementation.

14.
This article outlines the development and implementation of a cost-effective approach to client and program evaluation. The indices presented summarize a client's acquisition of behavioral skills (Skill Acquisition Index) and progression along a continuum of becoming less dependent and more productive (Client Movement Index). Data are presented summarizing how the two non-monetary outcome measures have been used in formative program evaluation. The paper also discusses some of the problems involved in implementing cost-effectiveness analyses in rehabilitation programs and suggests specific strategies to overcome them.
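The two indices described above lend themselves to simple ratio-style scoring. The sketch below is a hypothetical illustration of how such indices might be computed; the function names and scoring rules are assumptions for exposition, not the authors' actual formulas.

```python
def skill_acquisition_index(skills_mastered: int, skills_targeted: int) -> float:
    """Share of targeted behavioral skills a client has mastered (0-1)."""
    if skills_targeted == 0:
        return 0.0
    return skills_mastered / skills_targeted

def client_movement_index(baseline: int, current: int, max_level: int) -> float:
    """Progress along a dependence-to-productivity continuum,
    relative to the room for improvement at baseline."""
    if max_level == baseline:
        return 0.0
    return (current - baseline) / (max_level - baseline)

# Example: a client mastered 6 of 8 targeted skills and moved from
# level 2 to level 4 on a 5-point continuum.
print(skill_acquisition_index(6, 8))   # 0.75
print(client_movement_index(2, 4, 5))  # ≈ 0.67
```

Normalizing movement by the room left at baseline keeps the index comparable across clients who enter the program at different levels.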

15.
The emergence of adversarial models as an approach to formative and summative evaluation is gaining recognition among educational research professionals. The implementation of the Judicial Evaluation Model (JEM), as described in this article, is the first application to a human service employment and training program. Evaluation questions raised within the study were designed to assess the efficacy of linkage arrangements between Comprehensive Employment and Training Act (CETA) prime sponsors and education service providers in the Commonwealth of Virginia. The four stages of the JEM and its application to CETA are discussed; the panel findings are reported along with noted pitfalls and strengths, suggested guidelines for implementation, and a few recommendations.

16.
This article describes two stages of the Juvenile Justice Educational Enhancement Program's pre-, post-, and longitudinal evaluation research. Pilot studies were used to explore how to design statewide research of pre- and postassessment scores and community reintegration outcomes. Preliminary findings suggest that higher performing educational programs produce greater educational gains as measured by academic achievement tests, credits earned, and pupil progression rates. The findings also indicate that these programs have more students returning to school and lower recidivism rates. Building on the pilot studies, refinements were made to the research designs to enable more comprehensive statewide evaluation. Current research includes collection of pre- and postassessment scores from official sources on approximately 16,000 juvenile justice youths. In addition, a research design has been developed to examine program effectiveness by measuring community reintegration variables. Multiple data sources, including official and self-reported data on family, school, employment, and subsequent crime involvement, will be used in the longitudinal study.
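The core comparison in a pre/post design like the one above is the mean achievement gain per program. A minimal sketch, assuming invented scores (the data below are illustrative, not from the study):

```python
from statistics import mean

def mean_gain(records: list[tuple[int, int]]) -> float:
    """Average post-minus-pre gain across (pre, post) student records."""
    return mean(post - pre for pre, post in records)

# Hypothetical achievement-test scores for two programs.
higher_performing = [(410, 455), (390, 440), (420, 470)]
lower_performing = [(405, 420), (395, 405), (415, 430)]

print(mean_gain(higher_performing))  # ≈ 48.3
print(mean_gain(lower_performing))   # ≈ 13.3
```

In practice, gain comparisons of this kind would be adjusted for baseline differences and attrition before attributing the difference to program quality.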

17.
Several evaluation models exist for investigating unintended outcomes, including goal-free and systems evaluation. Yet methods for collecting and analyzing data on unintended outcomes remain under-utilized. Ripple Effects Mapping (REM) is a promising qualitative evaluation method with a wide range of program planning and evaluation applications. In situations where program results are likely to occur over time within complex settings, this method is useful for uncovering both intended and unintended outcomes. REM applies an Appreciative Inquiry facilitation technique to engage stakeholders in visually mapping sequences of program outcomes. Although it has been used to evaluate community development and health promotion initiatives, further methodological guidance for applying REM is still needed. The purpose of this paper is to contribute to the methodological development of evaluating unintended outcomes and extend the foundations of REM by describing steps for integrating it with grounded theory.
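The "map" an REM session produces is essentially a directed graph of outcomes and the downstream effects participants attribute to them. A minimal sketch of storing and traversing such a map, with entirely hypothetical outcome names (none are from the paper):

```python
# Each outcome maps to the downstream "ripples" participants reported.
ripples: dict[str, list[str]] = {
    "community garden built": ["neighbors meet weekly", "surplus donated"],
    "neighbors meet weekly": ["new walking group"],  # an unintended outcome
    "surplus donated": [],
    "new walking group": [],
}

def trace(outcome: str, depth: int = 0) -> list[tuple[int, str]]:
    """Depth-first walk returning each downstream effect with its depth."""
    rows = [(depth, outcome)]
    for effect in ripples.get(outcome, []):
        rows.extend(trace(effect, depth + 1))
    return rows

for d, name in trace("community garden built"):
    print("  " * d + name)
```

Representing the map this way makes it straightforward to distinguish first-order (intended) effects from the deeper, often unintended ripples that REM is designed to surface.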

18.
This paper addresses the need for models for assessing multicultural programs in community colleges, the most culturally and socioeconomically diverse educational institutions in the country. A three-dimensional framework presents faculty, student, and curriculum variables critical to the implementation and outcomes of multicultural programs. The framework emerged from the formative evaluation of a new interdisciplinary social science curriculum and guided the design of the national field test of that curriculum in 30 community college classrooms. Three kinds of results are reported: implementation patterns; appropriateness to faculty members' teaching goals; and impact on reading behavior, interest, overall learning, and political efficacy of students with diverse ages, ethnic and sociocultural backgrounds, and political positions. Political efficacy gains of older students and students from lower socioeconomic backgrounds are discussed. The importance of such a framework in documenting the interaction between a curriculum and its sociocultural context is stressed.

19.
Mental, emotional, and behavioral (MEB) health problems are prevalent globally. Despite effective programs that can prevent MEB problems and promote mental health, there has not been widespread adoption. UPSTREAM! Together was a planning project in three Colorado communities. Communities partnered with academic and policy entities to 1) translate evidence about MEB problem prevention into locally relevant messages and materials and 2) develop long-term plans for broad implementation of interventions to prevent high-priority MEB problems. Community members recognized the need to talk about MEB problems in order to prevent them. The UPSTREAM! communities localized messages designed to start conversations and sustain attention on preventing MEB problems. The communities understood that prevention takes sustained community attention and advocacy, knowing that important outcomes may be years away. Long-term implementation plans aimed to strengthen families and enhance social connections among youth. Despite community readiness and capacity to implement evidence-based programs, there were few funding opportunities, delaying program implementation and revealing gaps between funding policies and community readiness. This community-engaged experience suggests an achievable approach, acceptable to communities, and worthy of further development and testing. Policies that cultivate and support local expertise may help to increase wider community adoption of evidence-based programs that promote mental health among youth.

20.
The well-documented disparities in availability, accessibility, and quality of behavioral health services suggest the need for innovative programs to address the needs of ethnic minority youth. The current study aimed to conduct a participatory, formative evaluation of “Working on Womanhood” (WOW), a community-developed, multifaceted, school-based intervention serving primarily ethnic minority girls living in underserved urban communities. Specifically, the current study aimed to examine the feasibility, acceptability, and initial promise of WOW using community-based participatory research (CBPR) and represented the third phase of a community-academic partnership. Qualitative and quantitative data were collected from 960 WOW participants in 21 urban public schools, as well as WOW counselors, parents, and school staff over the course of one academic year. Results demonstrated evidence of the acceptability of WOW and noteworthy improvements for WOW participants in targeted outcomes, including mental health, emotion regulation, and academic engagement. Findings also indicated several challenges to implementation feasibility and acceptability, including screening and enrollment processes and curriculum length. Additionally, we discuss how, consistent with participatory and formative research, findings were used by program implementers to inform program improvements, including modifications to screening processes, timelines, curriculum, and trainings, all in preparation for a rigorous effectiveness evaluation.
