Similar Articles (20 results)
1.
The specification and measurement of program goals remain central to most evaluation research strategies, yet procedures for implementing this approach are not well articulated. The purpose of this discussion is to describe a stepwise procedure for programmatic goal setting and monitoring used in a demonstration drug treatment program for women. Three implementation steps are described: (a) goal setting, (b) checking for consistency, and (c) monitoring and feedback. The advantages and limitations of this approach are discussed, and useful complementary measurement strategies are suggested.
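As a rough illustration of these three steps, the sketch below encodes a goal record, a consistency check, and a monitoring-and-feedback report in Python. All names, measures, and thresholds are hypothetical, invented for illustration rather than drawn from the program described above.

```python
# Hypothetical sketch of the three-step procedure: (a) goal setting,
# (b) checking for consistency, (c) monitoring and feedback.
# All names, measures, and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    baseline: float                # level of the outcome measure at intake
    target: float                  # level the program aims to reach
    observations: list = field(default_factory=list)  # repeated measurements

def check_consistency(goals: list) -> list:
    """Step (b): flag goals whose targets do not move beyond baseline."""
    return [g.name for g in goals if g.target <= g.baseline]

def feedback(goal: Goal) -> str:
    """Step (c): compare the latest observation against the target."""
    if not goal.observations:
        return f"{goal.name}: no observations yet"
    latest = goal.observations[-1]
    status = "met" if latest >= goal.target else "not yet met"
    return f"{goal.name}: {latest:.1f} vs. target {goal.target:.1f} ({status})"

# Step (a): set a goal; then monitor it over three sessions.
g = Goal(name="days abstinent per month", baseline=5.0, target=25.0)
print(check_consistency([g]))      # [] -> the target is consistent with baseline
for value in (8.0, 16.0, 26.0):
    g.observations.append(value)
print(feedback(g))                 # days abstinent per month: 26.0 vs. target 25.0 (met)
```

In practice each goal would carry agency-specific measures, but the structure — set, check, monitor, feed back — mirrors the stepwise procedure the abstract describes.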

2.
Despite the increased importance of producing social workers who are prepared to evaluate practice, there is a paucity of literature addressing the pedagogical challenges of program evaluation courses. The present study evaluates the structure and pedagogical approach of a newly implemented program evaluation course. The course took a team-based, experiential learning approach to designing a program evaluation plan for students’ field placement agencies. Five student-led focus groups (N = 44) were conducted at the conclusion of the course to address two aims: to assess student perceptions of the content, structure, and delivery of the course, and to assess student attitudes toward applied research. Data were collected between November and December 2014 in a public university’s graduate social work program in the northeastern United States. The steps of grounded theory data analysis were applied to elicit major themes from student feedback. Results suggest that perceptions of applicability play a central role in shaping student attitudes toward program evaluation. Experiential and team-based pedagogical approaches appear to increase the accessibility of course content for students. Suggestions for enhancing student learning in program evaluation courses are discussed.

3.
Described here is a participant-observation strategy employed as one portion of the evaluation research at Family House, a residential treatment program for alcoholic mothers and their children. The major objectives of this leg of a two-legged research design, which combined this approach with a more standard pre- and post-test strategy, were to document and analyze the processes of social development in the new "small society." The ethnographer acted as consultant, with "raw data" generated in the form of daily observations dictated on a rotating basis by staff members over a two-year period. The resulting narrative was coded and analyzed by the anthropologist, using a phenomenological or interpretivist perspective that allowed the "native categories" of significance to emerge and guide the analysis. This novel research strategy is compared with standard ethnographic fieldwork, as well as with the pre- and post-test design, in terms of its strengths and weaknesses. Strengths include access to an authentic "inside view" of Family House reality, as well as an unusually rich longitudinal record of daily life in the facility. Problems include the danger of contaminating the data through the "feedback" provided to reward staff for their cooperation in the research effort. The author concludes, however, that since reactive effects are present in all designs, and since "feedback" is a normal feature of therapeutic milieux, validity is not unusually threatened. Finally, the purposes of evaluation research are scrutinized.

4.
The Teaching-Family Model serves as an example of how research can be used as feedback to change a residential treatment program for youths. Further, research can serve to modify training and evaluation of the program as well. The feedback loop established by continual research and evaluation serves to improve program quality, thus facilitating dissemination as the model is adopted by more agencies. From 1967 to 1980, the Teaching-Family Model expanded from one group home in Kansas to more than 150 homes across the United States. Through both successes and failures, proponents of the Teaching-Family Model learned that research and evaluation can and must be a part of the treatment delivery system rather than an occasional adjunct.

5.
Historically, there has been considerable variability in how formative evaluation has been conceptualized and practiced. The FORmative Evaluation Consultation And Systems Technique (FORECAST) is a formative evaluation approach that develops a set of models and processes that can be used across settings and times, while allowing for local adaptations and innovations. FORECAST integrates specific models and tools to address limitations in program theory, implementation, and evaluation. In the period since its initial use in a federally funded community prevention project in the early 1990s, evaluators have incorporated important formative evaluation innovations into FORECAST, including the integration of feedback loops and proximal outcome evaluation. In addition, FORECAST has been applied in a randomized community research trial. In this article, we describe updates to FORECAST and its implications for ameliorating failures in program theory, implementation, and evaluation.

6.
In this article we argue for a community-based approach as a means of promoting a culture of evaluation. We do this by linking two bodies of knowledge – the 70-year theoretical tradition of community-based research and the trans-discipline of program evaluation – that are seldom intersected within the evaluation capacity building literature. We use the three hallmarks of a community-based research approach (community-determined; equitable participation; action and change) as a conceptual lens to reflect on a case example of an evaluation capacity building program led by the Ontario Brain Institute. This program involved two community-based groups (Epilepsy Southwestern Ontario and the South West Alzheimer Society Alliance) who were supported by evaluators from the Centre for Community Based Research to conduct their own internal evaluation. The article provides an overview of a community-based research approach and its link to evaluation. It then describes the featured evaluation capacity building initiative, including reflections by the participating organizations themselves. We end by discussing lessons learned and their implications for future evaluation capacity building. Our main argument is that organizations that strive towards a community-based approach to evaluation are well placed to build and sustain a culture of evaluation.

7.
Utilizing a contextual model of evaluation, a goal-oriented method was applied to the Health Psychology program, a doctoral program in its early stages at the University of California, San Francisco. There were five stages involved in implementing this method: (1) clarifying the goals and objectives of the program, (2) prioritizing the objectives, (3) judging the attainment of the objectives, (4) organizing faculty/student input, and (5) providing feedback to program management. All faculty members and students were invited to participate as self-evaluators in this evaluation effort. The results indicated significant differences between the faculty group and the student group in their ranking and rating of the importance of specific educational and resource objectives. A one-year follow-up was obtained by interviewing the director of the program to assess the impact of the project on program planning. The advantages and disadvantages of the approach are discussed in light of this attempt to analyze a new educational program.
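Stages (2) through (5) suggest a simple quantitative core: collect faculty and student rankings of the same objectives, test their agreement, and feed discrepancies back to program management. The sketch below is a hypothetical illustration of that comparison using Spearman rank correlation; the objectives and ranks are invented, and scipy is assumed to be available.

```python
# Hypothetical comparison of faculty and student rankings of program
# objectives (stages 2-3), with discrepancies flagged for feedback to
# program management (stage 5). Objectives and ranks are invented.
from scipy.stats import spearmanr

objectives = ["research training", "clinical exposure", "mentoring",
              "funding support", "course breadth"]
faculty_ranks = [1, 4, 2, 5, 3]    # 1 = most important
student_ranks = [3, 5, 1, 2, 4]

rho, p_value = spearmanr(faculty_ranks, student_ranks)
print(f"rank agreement: rho={rho:.2f}, p={p_value:.2f}")

# Flag objectives the two groups rank very differently.
for name, f, s in zip(objectives, faculty_ranks, student_ranks):
    if abs(f - s) >= 2:
        print(f"discrepancy on '{name}': faculty rank {f}, student rank {s}")
```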

8.
The Southeastern Coastal Center for Agricultural Health and Safety (SCCAHS) is one of many newly funded federal research centers, housing five multidisciplinary research projects and seven pilot projects, and serving a multi-state region. In the early stages of such a complex project, with multiple teams separated by geography and discipline, the evaluation program has been integral in connecting internal and external stakeholders at the center and project levels. We used a developmental evaluation (DE) framework to respond to the complex political environment surrounding agricultural health and safety in the southeast; to engage external stakeholders in guiding the center’s research and outreach trajectories; to support center research teams in a co-creation process to develop logic models and tailored indicators; and to provide timely feedback within the center to address communication gaps identified by the evaluation program. By using DE principles to shape monitoring and evaluation approaches, our evaluation program has adapted to the dynamic circumstances presented as the center has moved from a plan in a grant proposal to implementation.

9.
Program review has not received the attention it warrants as a program evaluation tool, despite its wide use for evaluation and management purposes. The use of program review will probably endure on the strength of its face validity, irrespective of other developments in the field of program evaluation. Evaluators should recognize this and, accordingly, attempt to improve its effectiveness. This paper presents one organization's approach to achieving this objective through the explication of development principles, implementation guidelines, and review items. It also discusses the benefits that can be expected from a systematic development of this tool and presents various directions for future research in this area.

10.
This paper proposes ten steps to make evaluations matter. The ten steps combine usual recommended practices, such as developing program theory and implementing rigorous evaluation designs, with a stronger focus on more unconventional steps, including developing learning frameworks, exploring pathways of evaluation influence, and assessing spread and sustainability. Consideration of these steps can lead to a focused dialogue between program planners and evaluators and can result in more rigorously planned programs. The ten steps can also help in developing and implementing evaluation designs that have greater potential for policy and programmatic influence. The paper argues that there is a need to go beyond a formulaic approach to program evaluation design that often does not address the complexity of the programs; the complexity of the program needs to inform the design of the evaluation. The ten steps described in this paper are heavily informed by a Realist approach to evaluation, which attempts to understand what it is about a program that makes it work.

11.
This study of evaluation utilization identified organizational, political, and practical arrangements that facilitated wide use of The Interim Report Evaluation in policymaking for California's Early Childhood Education program. In a program fraught with tensions, where evaluations had been political tools, this evaluation was special. The research used a field study approach, with analysis guided by the literature on organizations, policymaking, and evaluation utilization. It identified techniques for maintaining political support, marshalling organizational resources, and designing and disseminating an evaluation that used an ethnographic approach and was directly applied to policy deliberations. Such techniques are significant when evaluators and researchers need to convince policymakers of their worth. This case study adds to knowledge of the research/policy intersection, ethnographic evaluation, and state education policymaking.

12.
A utilization-focused approach in evaluating this program would have placed more emphasis on clearly identifying evaluation users, their information needs, and likely uses of findings. With the advantage of hindsight, it appears that such an approach might have led to greater focus on critical implementation issues that affected the eventual decision not to continue the program despite seemingly positive outcome evaluation findings. Other methodological options are also discussed, including the potential usefulness of critical case studies. Finally, there is discussion of how to prepare decision makers for utilization and work with them to interpret and apply findings.

13.
Weaknesses in evaluations often can be traced to structural limitations in the positions of evaluation researchers. Conventional human relations techniques are often an insufficient basis for securing strong support for evaluation research. Strategies for increasing evaluation research leverage are reviewed. Alignment of evaluation research with regulatory bodies that have the authority to suspend public program expenditures is advocated. Several likely obstacles to the development of the regulatory evaluation model are anticipated and addressed.

14.
Evaluation research is one of the most rapidly evolving fields of applied behavioral science. As demand for program assessment has increased, the number of alternative evaluation approaches has also grown. As a result, everyday practitioners have often lacked sufficient guidelines for the choice of appropriate evaluation strategies. The present paper articulates an underlying epistemological distinction between (a) experimental evaluation models, which simplify program realities in generalizable analyses of discrete causes and effects, and (b) contextual evaluation models, which holistically examine particular program operations. These two evaluation approaches are directed at different purposes and are applicable to different program settings. A typology of program characteristics (breadth of goals, scope of treatment, specificity of results, and clarity of theory) is developed and linked to the appropriateness of experimental and contextual evaluation.
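The typology lends itself to a simple decision rule: the more a program's goals, treatment, expected results, and theory are narrow and well specified, the better the fit with an experimental model; otherwise a contextual model fits better. The sketch below is one hypothetical encoding of that rule in Python; the binary ratings and the cutoff are illustrative assumptions, not taken from the paper.

```python
# Hypothetical encoding of the typology: programs whose goals, treatment,
# results, and theory are narrow and well specified fit an experimental
# model; broad, diffuse programs fit a contextual model. The binary
# ratings and the cutoff of 3 are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProgramProfile:
    broad_goals: bool           # breadth of goals
    wide_treatment_scope: bool  # scope of treatment
    specific_results: bool      # specificity of results
    clear_theory: bool          # clarity of theory

def recommend_evaluation(p: ProgramProfile) -> str:
    # Count characteristics favoring simplification into discrete causes and effects.
    experimental_fit = sum([not p.broad_goals,
                            not p.wide_treatment_scope,
                            p.specific_results,
                            p.clear_theory])
    return "experimental" if experimental_fit >= 3 else "contextual"

# A tightly specified demonstration program fits an experimental model;
# a broad community initiative fits a contextual one.
print(recommend_evaluation(ProgramProfile(False, False, True, True)))  # experimental
print(recommend_evaluation(ProgramProfile(True, True, False, False)))  # contextual
```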

15.
Theory-based evaluation (TBE) is an evaluation method that shows how a program will work under certain conditions and has been supported as a viable, evidence-based option in cases where randomized trials or high-quality quasi-experiments are not feasible. Despite the model's widely accepted theoretical appeal, there are few examples of its well-implemented use, probably because of the time and money required for planning and confusion over the definitions of research and evaluation functions and roles. In this paper, we describe the development of a theory-based evaluation design in a Math and Science Partnership (MSP) research project funded by the U.S. National Science Foundation (NSF). Through this work we developed an organizational model distinguishing between and integrating evaluation and research functions, explicating personnel roles and responsibilities, and highlighting connections between research and evaluation work. Although the research and evaluation components operated with independent budgeting, staffing, and implementation activities, we were able to combine datasets across activities, allowing us to assess the integrity of the program theory, not just the hypothesized connections within it. This model has since been used for proposal development and has been invaluable, as it creates a research and evaluation plan that is seamless from the beginning.

16.

17.
Evaluating an innovation for federal, state, and local policymakers and program managers alike entails conflicting demands on the evaluation study. Policymakers at federal, state, and local levels are best assisted by impact evaluations, whereas state and local program managers are best assisted by process evaluations. In-house evaluators often have an advantage in conducting process evaluations; external evaluators generally have an advantage in conducting impact evaluations. A cost-effective approach may be to combine in-house process evaluation and external impact evaluation. This dual approach was found to reduce conflicting demands on the evaluation of an experimental videotex system for agricultural producers.

18.
19.
This article presents an exemplar of a model-guided process evaluation that specifies the treatment model, assesses its implementation, monitors the fidelity of the model throughout the project, assesses model exposure and absorption, and helps explain the program's intermediate effects (proximal outcomes) as well as its final effects (distal outcomes). The New Mexico study on office-based prescribing and community pharmacy dispensing of methadone is a research demonstration project that phases a small group of female methadone maintenance patients out of methadone clinics and into a program where they obtain their scheduled doses of methadone at pharmacies that work in collaboration with physicians and a social worker. The patients' methadone treatment in this way becomes part of their overall health care. Early detection of implementation problems (e.g., the omission of program content or the delivery of inaccurate information) enables the researcher to make adjustments before the problems become unmanageable and the integrity of the original research design is compromised. A model-guided process evaluation can critically inform health services research demonstrations designed to enable continuous, ongoing feedback and improvement of client-related services.
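One way to picture the fidelity-monitoring component is as a comparison between the components the treatment model requires and the components actually delivered, with omissions flagged early for correction. The sketch below is a hypothetical Python illustration; the component names are invented and not taken from the New Mexico study.

```python
# Hypothetical fidelity check: compare the components the treatment model
# requires against those actually delivered, and flag omissions early.
# Component names are invented, not taken from the New Mexico study.
REQUIRED_COMPONENTS = {
    "physician_consult",
    "pharmacy_dispensing",
    "social_work_contact",
    "dose_schedule_review",
}

def fidelity_report(delivered: set) -> dict:
    omitted = REQUIRED_COMPONENTS - delivered
    return {
        "fidelity": 1 - len(omitted) / len(REQUIRED_COMPONENTS),
        "omitted_components": sorted(omitted),
    }

# Early detection: one required component was never delivered this period.
print(fidelity_report({"physician_consult", "pharmacy_dispensing",
                       "dose_schedule_review"}))
# -> {'fidelity': 0.75, 'omitted_components': ['social_work_contact']}
```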

20.
Community-based participatory research (CBPR) and developmental evaluation (DE) have emerged over recent decades as separate approaches for addressing complex social issues. The current literature offers little with respect to the use of CBPR and DE in combination, although the two approaches are complementary. In this paper, we outline how CBPR and DE were used to develop a model of supportive housing for teen families. More specifically, we describe the structures and processes that contributed to this development, including (1) our partnership approach, (2) pooled resources, (3) regular opportunities for collaboration and reflection, (4) integration of multiple data sources, (5) ongoing feedback and knowledge dissemination, and (6) adjustments to program practices. We end by providing insights into the lessons we learned through this project, showing how researchers and community partners can collaboratively use CBPR and DE to develop a program model in complex community settings. These insights will be important for researchers, evaluators, and practitioners seeking to develop programming in response to complex community issues.
