Similar Documents
1.
Like artisans in a professional guild, we evaluators create tools to suit our ever-evolving practice. The tools we use as evaluators are the primary artifacts of our profession; they reflect our practice and embody an amalgamation of paradigms and assumptions. With the increasing shift in evaluation purposes from judging program worth to understanding how programs work, the evaluator’s role is changing to that of facilitating stakeholders in a learning process. This involves clarifying purposes and choices, as well as unearthing critical assumptions. In such a role, evaluators become major tool-users and begin to innovate with small refinements or produce completely new tools to fit a specific challenge or context. We interrogate the form and function of 12 tools used by evaluators when working with complex evaluands and complex contexts. The form is described in terms of traditional qualitative techniques and particular characteristics of the elements, use and presentation of each tool. The function of each tool is then analyzed with respect to articulating assumptions and affecting the agency of evaluators and stakeholders in complex contexts.

2.
Schools, districts, and state-level educational organizations are experiencing a great shift in the way they do the business of education. This shift focuses on accountability, specifically through the expectation that evaluation-focused efforts will be used effectively to guide and support decisions about educational program implementation. As such, education leaders need specific guidance and training on how to plan, implement, and use evaluation to critically examine district- and school-level initiatives. One specific effort intended to address this need is the Capacity for Applying Project Evaluation (CAPE) framework. The CAPE framework is composed of three crucial components: a collection of evaluation resources; a professional development model; and a conceptual framework that guides the work to support evaluation planning and implementation in schools and districts. School and district teams serve as active participants in the professional development and ultimately as formative evaluators of their own school- or district-level programs by working collaboratively with evaluation experts. The CAPE framework involves school and district staff in planning and implementing their evaluation. They are the ones deciding what evaluation questions to ask, which instruments to use, what data to collect, and how and to whom results should be reported. Initially this work is done through careful scaffolding by evaluation experts, where supports are slowly pulled away as the educators gain experience and confidence in their knowledge and skills as evaluators. Since CAPE engages all stakeholders in all stages of the evaluation, the philosophical intentions of these efforts to build capacity for formative evaluation align closely with the collaborative evaluation approach.

3.
In realist evaluation, where researchers aim to make program theories explicit, they can encounter competing explanations as to how programs work. Managing explanatory tensions from different sources of evidence in multi-stakeholder projects can challenge external evaluators, especially when access to pertinent data, like client records, is mediated by program stakeholders. In this article, we consider two central questions: how can program stakeholder motives shape a realist evaluation project; and how might realist evaluators respond to stakeholders’ belief-motive explanations, including those about program effectiveness, based on factors such as supererogatory commitment or trying together in good faith? Drawing on our realist evaluation of a service reform initiative involving multiple agencies, we describe stakeholder motives at key phases, highlighting a need for tactics and skills that help to manage explanatory tensions. In conclusion, the relevance of stakeholders’ belief-motive explanations (‘we believe the program works’) in realist evaluation is clarified and discussed.

4.
This paper proposes ten steps to make evaluations matter. The ten steps combine the usual recommended practices, such as developing program theory and implementing rigorous evaluation designs, with a stronger focus on more unconventional steps, including developing learning frameworks, exploring pathways of evaluation influence, and assessing spread and sustainability. Consideration of these steps can lead to a focused dialogue between program planners and evaluators and can result in more rigorously planned programs. The ten steps can also help in developing and implementing evaluation designs that have greater potential for policy and programmatic influence. The paper argues that there is a need to go beyond a formulaic approach to program evaluation design that often does not address the complexity of the programs. The complexity of the program will need to inform the design of the evaluation. The ten steps described in this paper are heavily informed by a Realist approach to evaluation. The Realist approach attempts to understand what it is about a program that makes it work.

5.
People invited to participate in an evaluation process will inevitably come from a variety of personal backgrounds and hold different views based on their own lived experience. However, evaluators are in a privileged position because they have access to information from a wide range of sources and can play an important role in helping stakeholders to hear and appreciate one another's opinions and ideas. Indeed, in some cases a difference in perspective can be utilised by an evaluator to engage key stakeholders in fruitful discussion that can add value to the evaluation outcome. In other instances the evaluator finds that the task of facilitating positive interaction between multiple stakeholders is just ‘an uphill battle’, and so conflict, rather than consensus, occurs as the evaluation findings emerge and are debated. As noted by Owen [(2006) Program evaluation: Forms and approaches (3rd ed.). St. Leonards, NSW: Allen & Unwin] and other eminent evaluators before him [Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3–46). Thousand Oaks, CA: Sage Publications; Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage Publications; Stake, R. A. (1983). Stakeholder influence in the evaluation of cities-in-schools. New Directions for Program Evaluation, 17, 15–30], conflict in an evaluation process is not unexpected. The challenge is for evaluators to facilitate dialogue between people who hold strongly opposing views, with the aim of helping them to achieve a common understanding of the best way forward. However, this does not imply that consensus will be reached [Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage]. What is essential is that the evaluator assists the various stakeholders to recognise and accept their differences and be willing to move on. But the problem is that evaluators are not necessarily equipped with the technical or personal skills required for effective negotiation. In addition, the time and effort that are required to undertake this mediating role are often not sufficiently understood by those who commission a review. With such issues in mind, Markiewicz, A. [(2005). A balancing act: Resolving multiple stakeholder interests in program evaluation. Evaluation Journal of Australasia, 4(1–2), 13–21] has proposed six principles upon which to build a case for negotiation to be integrated into the evaluation process. This paper critiques each of these principles in the context of an evaluation undertaken of a youth program. In doing so it challenges the view that stakeholder consensus is always possible if program improvement is to be achieved. This has led to some refinement and further extension of the proposed theory of negotiation that is seen to be instrumental to the role of an evaluator.

6.
The authors, three African-American women trained as collaborative evaluators, offer a comparative analysis of collaborative evaluation (O'Sullivan, 2004) and culturally responsive evaluation approaches (Frierson, Hood, & Hughes, 2002; Kirkhart & Hopson, 2010). Collaborative evaluation techniques immerse evaluators in the cultural milieu of the program, systematically engage stakeholders and integrate their program expertise throughout the evaluation, build evaluation capacity, and facilitate the co-creation of a more complex understanding of programs. However, the authors note that without explicit attention to considerations raised in culturally responsive evaluation approaches (for example, issues of race, power, and privilege), the voices and concerns of marginalized and underserved populations may be acknowledged, but not explicitly or adequately addressed. The intentional application of collaborative evaluation techniques coupled with a culturally responsive stance enhances the responsiveness, validity and utility of evaluations, as well as the cultural competence of evaluators.

7.
With this series of papers, evaluators are being called to substantiate the rationale and warrant for their own evaluative actions in ways parallel to how evaluators question the logic of program interventions, both as designed and as implemented. This endeavor is timely, appropriate, and important. In these comments, I raise modest questions about the logical constitution of an evaluation theory and about what is missing from a textual reading alone of such theory.

8.
Evaluating an innovation for federal, state, and local policymakers and program managers alike entails conflicting demands on the evaluation study. Policymakers at federal, state, and local levels are best assisted by impact evaluations, whereas state and local program managers are best assisted by process evaluations. In-house evaluators often have an advantage in conducting process evaluations; external evaluators generally have an advantage in conducting impact evaluations. A cost-effective approach may be to combine in-house process evaluation and external impact evaluation. This dual approach was found to reduce conflicting demands on the evaluation of an experimental videotex system for agricultural producers.

9.
One of the most common and most difficult problems confronting program evaluators is the resistance they encounter when attempting to implement evaluation efforts. The philosophical and psychological foundations of this resistance are explored. Problems in the evaluative paradigm are presented. Value differences between evaluators and practitioners are highlighted. Emotional bases for practitioner behavior are clarified, and types and loci of resistance are outlined. Suggestions for coping strategies are discussed.

10.
As the number of large federal programs increases, so, too, does the need for a more complete understanding of how to conduct evaluations of such complex programs. The research literature has documented the benefits of stakeholder participation in smaller-scale program evaluations. However, given the scope and diversity of projects in multi-site program evaluations, traditional notions of participatory evaluation do not apply. The purpose of this research is to determine the ways in which stakeholders are involved in large-scale, multi-site STEM evaluations. This article describes the findings from a survey of 313 program leaders and evaluators and from follow-up interviews with 12 of these individuals. Findings from this study indicate that attendance at meetings and conferences, planning discussions within the project related to use of the program evaluation, and participation in data collection should be added to the list of activities that foster feelings of evaluation involvement among stakeholders. In addition, perceptions of involvement may vary according to breadth or depth of evaluation activities, but not always both. Overall, this study suggests that despite the contextual challenges of large, multi-site evaluations, it is feasible to build feelings of involvement among stakeholders.

11.
The ethical work of program evaluators is based on a covenant of honesty and transparency among stakeholders. Yet even under the most favorable evaluation conditions, threats to ethical standards exist and muddle that covenant. Unfortunately, ethical issues associated with different evaluation structures and contracting arrangements have received little attention in the evaluation research literature. This article focuses on the unintended ethical threats associated with multitiered evaluation contexts. After briefly reviewing the various frames through which evaluation theory and ethics are commonly viewed, we discuss ethical challenges associated with multitiered evaluation designs, including examples drawn from our evaluation projects. The article concludes with specific findings and recommendations for evaluators, grantors, grantees, and researchers.

12.
Professional evaluators are often called upon to analyze data produced by a catastrophically inadequate evaluation design. This problem is occurring more frequently as accountability pressures force program experts into evaluation activities for which they are not trained. A remedial strategy, involving diagnosis of error, application of a corrective procedure, and sensitization of program personnel to the need for a more sophisticated stance, is proposed as a solution. A case study is described, and the contribution of a remedial strategy to improved evaluation is outlined.

13.
Since the early 1990s, the concept mapping technique developed by William M. K. Trochim has been widely used by evaluators for program development and evaluation and has proven to be an invaluable tool for evaluators and program planners. The technique combines qualitative and statistical analysis and is designed to help identify and prioritize the components, dimensions, and particularities of a given reality. The aim of this paper is to propose an alternative way of conducting the statistical analysis to make the technique even more useful and the results easier to interpret. We posit that some methodological choices made at the inception stage of the technique were ill-informed, producing maps of participants’ points of view that were not optimal representations of their reality. Such a depiction resulted from the statistical analysis process in which multidimensional scaling (MDS) is applied to the similarity matrix, followed by a hierarchical cluster analysis (HCA) of the Euclidean distances between statements as plotted on the resulting two-dimensional MDS map. As an alternative, we suggest that HCA should be performed first and MDS second, rather than the reverse. To support this proposal, we present three levels of argument: 1) a logical argument backed up by expert opinions on this issue; 2) statistical evidence of the superiority of our proposed approach; and 3) the results of a social validation experiment.
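To make the two orderings concrete, here is a minimal sketch of both pipelines using scipy and scikit-learn. The statement-similarity matrix, the number of clusters, and the linkage methods are illustrative assumptions, not data or choices taken from the paper, and the original work did not necessarily use these libraries.

```python
# Sketch of the two concept-mapping analysis orderings on toy data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical co-occurrence matrix for 6 statements sorted by 10
# participants: similarity[i, j] = number of participants who put
# statements i and j in the same pile.
similarity = np.array([
    [10,  8,  7,  1,  0,  1],
    [ 8, 10,  6,  2,  1,  0],
    [ 7,  6, 10,  1,  1,  2],
    [ 1,  2,  1, 10,  9,  7],
    [ 0,  1,  1,  9, 10,  8],
    [ 1,  0,  2,  7,  8, 10],
])
dissimilarity = similarity.max() - similarity  # convert to distances

# Traditional ordering (MDS first): project to 2-D, then cluster the
# Euclidean distances between points on the resulting map.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
mds_first = fcluster(linkage(pdist(coords), method="ward"),
                     t=2, criterion="maxclust")

# Proposed ordering (HCA first): cluster the full dissimilarity matrix
# directly; MDS is then used only to draw the map.
condensed = squareform(dissimilarity, checks=False)
hca_first = fcluster(linkage(condensed, method="average"),
                     t=2, criterion="maxclust")

print("MDS-first clusters:", mds_first)
print("HCA-first clusters:", hca_first)
```

On a cleanly separated toy matrix like this one the two orderings will usually agree; the paper's argument is that on real sort data, clustering the full matrix avoids distortions introduced by the two-dimensional projection.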

14.
Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which reject stakeholder participation on evaluation teams, Collaborative Evaluation assumes that active, ongoing engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Among similar "participant-oriented" evaluation approaches (Fitzpatrick, Sanders, & Worthen, 2011), Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration. This means that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each program's evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can be used to mean different things, which can be confusing. The articles use a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from the other participatory evaluation approaches.
