Similar Documents
20 similar documents found (search time: 31 ms).
1.
Health systems are known for being complex. Yet there is a paucity of evidence about programs that successfully develop competent frontline managers to navigate these complex systems. There is even less evidence about developing frontline managers in areas of contextual complexity such as geographically remote and isolated health services. This study used a customised management development program containing continuous quality improvement (CQI) approaches to determine whether additional levels of evaluation could provide evidence for program impact. Generalisability is limited by the small sample size; however, the findings suggest that continuous improvement approaches, such as action-learning, workplace-based CQI projects, not only provide for real-world application of managers' learning but can also produce the type of data needed to conduct evaluations of organisational impact and cost-benefits. The case study contributes to the literature in an area where there is a scarcity of empirical research. Further, this study proposes a pragmatic method for using CQI approaches with existing management development programs to generate the type of data needed for multi-level evaluation.

2.
This research note is based on the evaluation of the Comenius project Teacher‐IN‐SErvice‐Training‐for‐Roma‐inclusion (INSETRom). The project represented an international effort that was undertaken to bridge the gap between Roma and non‐Roma communities and to improve the educational attainment of Roma children in the mainstream educational system. The evaluation of the project showed that such projects can impact a teacher's confidence and attitudes, but that implementing new insights poses many challenges.

3.
Portfolio evaluation is the evaluation of multiple projects with a common purpose. While logic models have been used in many ways to support evaluation, and data visualization has been used widely to present and communicate evaluation findings, the combined use of logic models and data visualization for portfolio evaluation remains surprisingly limited in the literature. Using data from a sample portfolio of 209 projects that aim to improve the system of early care and education (ECE), this study illustrated how to use logic models and data visualization techniques to conduct a portfolio evaluation by answering two evaluation questions: "To what extent are the elements of a logic model (strategies, sub-strategies, activities, outcomes, and impacts) reflected in the sample portfolio?" and "Which dominant paths through the logic model were illuminated by the data visualization technique?" For the first question, the visualization technique illuminated several dominant strategies, sub-strategies, activities, and outcomes. For the second question, our visualization techniques made it convenient to identify critical paths through the logic model. Implications for both program evaluation and program planning were discussed.
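As an illustrative sketch only (not drawn from the study itself), the kind of tally behind such a visualization could be produced along these lines in Python; the element names and project records below are hypothetical:

```python
# Illustrative sketch only: a minimal way to surface dominant logic-model
# elements and paths from a coded project portfolio. The field names and
# sample records are hypothetical, not data from the study.
from collections import Counter

# Each project is coded against the logic-model elements it addresses.
projects = [
    {"strategy": "workforce", "activity": "training", "outcome": "quality"},
    {"strategy": "workforce", "activity": "training", "outcome": "retention"},
    {"strategy": "policy", "activity": "advocacy", "outcome": "funding"},
    {"strategy": "workforce", "activity": "training", "outcome": "quality"},
]

# Frequency of individual logic-model elements across the portfolio.
element_counts = {
    field: Counter(p[field] for p in projects)
    for field in ("strategy", "activity", "outcome")
}

# Frequency of full strategy -> activity -> outcome paths; the most common
# paths are the "dominant paths" a Sankey-style visualization would highlight.
path_counts = Counter(
    (p["strategy"], p["activity"], p["outcome"]) for p in projects
)

for path, n in path_counts.most_common(3):
    print(" -> ".join(path), f"({n} projects)")
```

Counts of this kind are what a flow-style diagram would render as link widths between logic-model elements.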

4.

The research literature on triangulation has paid little attention to the problematic of 'making sense of dissonant data'. Nor has there been much discussion around the use of the technique of triangulation when researching families. Through a presentation of research findings gathered from self-report questionnaires and in-depth interviews with couples and families, the possibilities of convergent, complementary and dissonant data and their interpretation are explored. The paper reflects on the ontological, epistemological and methodological tensions that must be negotiated when working with triangulated data. It is argued that, given the multi-faceted context and intimate subject matter in family and couples research, there is a high likelihood of dissonant findings. Recommendations are made for family researchers interested in the technique of triangulation to consider the context and process of their research in the interpretation of their data. Despite the challenges that triangulation throws up for researchers, it is argued that, working within a post-positivist paradigm, triangulation enables analysis which is both more complex and more meaningful.

5.

Realist evaluation (RE) is a research design, increasingly used in program evaluation, that aims to explore and understand the influence of context and underlying mechanisms on intervention or program outcomes. Several methodological challenges, however, are associated with this approach. This article summarizes RE's key principles and examines some documented challenges and solutions when analyzing RE data, including the development of Context-Mechanism-Outcome (CMO) configurations. An analytic method using NVivo features is also presented. This method makes it possible to respond to certain analytic difficulties associated with RE by facilitating the identification of patterns and ensuring transparency in the analytical process.
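As a purely illustrative sketch (the article describes an NVivo-based workflow, not code), CMO configurations can be represented generically for pattern spotting; the record type and sample entries below are assumptions, not the authors' data or NVivo's API:

```python
# Illustrative sketch only: a generic way to represent and group
# Context-Mechanism-Outcome (CMO) configurations for pattern spotting.
# This is not NVivo's API; names and sample entries are hypothetical.
from collections import defaultdict
from typing import NamedTuple

class CMO(NamedTuple):
    context: str    # the circumstances in which the program operates
    mechanism: str  # the reasoning/resources the program triggers
    outcome: str    # the observed or expected result

configurations = [
    CMO("rural clinic, low staffing", "peer support", "higher retention"),
    CMO("urban clinic, high turnover", "peer support", "no change"),
    CMO("rural clinic, low staffing", "financial incentive", "higher retention"),
]

# Group configurations by mechanism to see how the same mechanism plays
# out across contexts -- the kind of pattern a realist analysis looks for.
by_mechanism = defaultdict(list)
for cmo in configurations:
    by_mechanism[cmo.mechanism].append((cmo.context, cmo.outcome))

for mechanism, pairs in by_mechanism.items():
    print(mechanism)
    for context, outcome in pairs:
        print(f"  {context} -> {outcome}")
```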

6.
In realist evaluation, where researchers aim to make program theories explicit, they can encounter competing explanations as to how programs work. Managing explanatory tensions from different sources of evidence in multi-stakeholder projects can challenge external evaluators, especially when access to pertinent data, like client records, is mediated by program stakeholders. In this article, we consider two central questions: how can program stakeholder motives shape a realist evaluation project; and how might realist evaluators respond to stakeholders' belief-motive explanations, including those about program effectiveness, based on factors such as supererogatory commitment or trying together in good faith? Drawing on our realist evaluation of a service reform initiative involving multiple agencies, we describe stakeholder motives at key phases, highlighting a need for tactics and skills that help to manage explanatory tensions. In conclusion, the relevance of stakeholders' belief-motive explanations ('we believe the program works') in realist evaluation is clarified and discussed.

7.
The Southeastern Coastal Center for Agricultural Health and Safety (SCCAHS) is one of many newly funded federal research centers, housing five multidisciplinary research projects and seven pilot projects, and serving a multi-state region. In the early stages of such a complex project, with multiple teams separated by geography and disciplines, the evaluation program has been integral in connecting internal and external stakeholders at the center and project levels. We used a developmental evaluation (DE) framework to respond to the complex political environment surrounding agricultural health and safety in the southeast; to engage external stakeholders in guiding the center's research and outreach trajectories; to support center research teams in a co-creation process to develop logic models and tailored indicators; and to provide timely feedback within the center to address communication gaps identified by the evaluation program. By using DE principles to shape monitoring and evaluation approaches, our evaluation program has adapted to the dynamic circumstances presented as our center has moved from a plan in a grant proposal to implementation.

8.
The Safe Schools/Healthy Students (SS/HS) national evaluation seeks to assess both the implementation process and the results of the SS/HS initiative, exploring factors that have contributed to or detracted from grantee success. Each site is required to forge partnerships with representatives from education, mental health, juvenile justice, and law enforcement, coordinating and integrating their efforts and working together to contribute to comparable outcomes (e.g., reduced violence and alcohol and drug use, improved mental health services). The evaluation uses multiple data collection techniques (archival data, surveys, site visits, interviews, and focus groups) from a variety of sources (project directors, community partners, schools, and students) over several years. Certain characteristics of the SS/HS initiative represent unique challenges for the evaluation, including the absence of common metrics for baseline and outcome data and the lack of a comparison group. A unifying program theory was required to address these challenges and synthesize the large amounts of qualitative and quantitative information collected. This article stresses the role of program theory in guiding the evaluation.

9.
10.
Case study research in the fields of Industrial Relations and Sociology of Work has always included the use of multiple methods and perspectives. However, there is little methodological reflection on the specific challenges of combining different forms of data, methods, theories and researchers within a single study. Confronting typical research practices with methodological discussions about triangulation shows, firstly, that strict triangulation reaches its limits when doing case studies in this particular academic field and, secondly, that triangulation can seldom be seen as a means of validation, but rather as an alternative to it.

11.
This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work domains - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework, as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements reached throughout the design cycles, observable in the trends of the quantitative results from three successive trials of the applications and in the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence for the effectiveness of the particular set of complementary evaluation methods, built around a common inquiry structure, used for the evaluation - particularly in facilitating triangulation of the data.

12.
We present the collaborative development of a web-based data collection and monitoring plan for thirty-two county councils within New Mexico's health council system. The monitoring plan, a key component in our multiyear participatory statewide evaluation process, was co-developed with the end users: representatives of the health councils. Guided by the Institute of Medicine's Community Health Improvement Process framework, we first developed a logic model that delineated processes and intermediate systems-level outcomes in council development, planning, and community action. Through the online system, health councils reported data on intermediate outcomes, including policy changes and funds leveraged. The system captured data that were common across the health council system, yet was also flexible enough that councils could report their unique accomplishments at the county level. A main benefit of the online system was that it provided the ability to assess intermediate outcomes across the health council system. Developing the system was not without challenges, including creating processes to ensure participation across a large rural state; creating shared understanding of intermediate outcomes and indicators; and overcoming technological issues. Despite these challenges, however, the benefits of committing to participatory processes far outweighed them.

13.
Given the scarcity of rigorous evaluations, and to commence a realist study addressing the lack of knowledge about how interventions directed towards "NEET" youth work, this research aimed to understand how and under what circumstances (re)engagement initiatives are expected to facilitate the social integration of young people whose situation prevents them from entering studies or work. In the first phase of the realist evaluation, qualitative interviews with five managerial stakeholders from two northern Swedish initiatives and reviews of documents were carried out for data collection. Using thematic analysis and retroductive reasoning, an intervention-context-actors-mechanisms-outcomes configuration was developed to elicit an initial programme theory explaining how the initiatives were presumed to operate and under what contextual contingencies. The results indicate that the intervention is expected to improve the youths' wellbeing and engage them in work or studies by strengthening their competence and confidence in a caring and collaborative context. To incorporate the diverse voices and heterogeneous experiences of the youth themselves, and to ascertain whether the intervention works as intended, for whom, in what conditions and why, the results now need to be tested in selected cases and refined in subsequent phases of the evaluation research.

14.
In this paper, we explore some of the methodological challenges that evaluators face in assessing the impacts of complex intervention strategies. We illustrate these challenges using the specific example of an impact evaluation of one of the six focal areas of the Global Environment Facility: its biodiversity program. The discussion is structured around the concepts of attribution and aggregation, offering the reader a framework for reflection. Subsequently, the paper discusses how theory-based evaluation can provide a basis for addressing the attribution and aggregation challenges presented.

15.
The Community Development Learning Initiative (CDLI) in Calgary, Alberta, Canada aims to be a network that brings together neighbourhood residents, community development practitioners and other supporters to learn and act on neighbourhood-based, citizen-led community development projects. In 2013, the CDLI initiated The Evaluation for Learning and Dialogue Project to provide the opportunity for organizations and supporters to work together to establish a shared vision and goals through discussions about evaluation learning and outcomes. The project was intended to be a useful learning tool for participating organizations, enabling them to engage in an evaluative methodological process, record relevant information, and compare and learn from each other's projects. Outcome Harvesting was chosen as the evaluation methodology for the project. This article reviews critical learning from the project on the use of the Outcome Harvesting methodology in evaluating the learning and outcomes of local community development projects, and it provides lessons for other jurisdictions interested in implementing this methodology.

16.
We argue that the complex, innovative and adaptive nature of Massive Open Online Course (MOOC) initiatives poses particular challenges to monitoring and evaluation, in that any evaluation strategy will need to follow a systems approach. This article aims to guide organizations implementing MOOCs through a series of steps to help them develop a strategy to monitor, improve, and judge the merit of their initiatives. We describe how we operationalise our strategy by first defining the different layers of interacting agents in a given MOOC system and then tailoring our approach to these layers. Specifically, we developed a two-pronged approach: individual projects are assessed through performance monitoring, with assessment criteria defined at the outset to include coverage, participation, quality and student achievement, while the success of an overall initiative is considered within a more adaptive, emergent evaluation inquiry framework. We present the inquiry framework we developed for MOOC initiatives and show how it might be used to develop evaluation questions and an assessment methodology. We also define the more fixed indicators and measures for project performance monitoring. Our strategy is described as it was developed to inform the evaluation of a MOOC initiative at the University of Cape Town (UCT), South Africa.
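As a hypothetical illustration only (the abstract names the project-level indicators but gives no formulas), fixed performance-monitoring indicators of this kind might be computed roughly as follows; the ratio definitions and sample figures are assumptions:

```python
# Hypothetical sketch of the fixed project-level indicators named in the
# abstract (coverage, participation, quality, student achievement); the
# formulas and sample figures are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class MoocStats:
    target_audience: int   # intended reach of the course
    enrolled: int          # learners who signed up
    active: int            # learners who engaged with the content
    completed: int         # learners who passed or earned a certificate
    mean_rating: float     # average learner satisfaction rating (1-5)

def performance_indicators(s: MoocStats) -> dict:
    return {
        "coverage": s.enrolled / s.target_audience,     # reach vs. intended audience
        "participation": s.active / s.enrolled,         # share of enrollees who engaged
        "achievement": s.completed / s.active,          # share of active learners who completed
        "quality": s.mean_rating / 5.0,                 # normalized satisfaction proxy
    }

print(performance_indicators(
    MoocStats(target_audience=5000, enrolled=3200, active=1400,
              completed=420, mean_rating=4.1)
))
```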

17.
Understanding the tourism‐poverty link is critical if tourism is to be used as a mechanism for reducing poverty. Yet, the available empirical analysis is insufficient for this. This article proposes a research agenda for closing this gap in the literature. First, it argues that, while analysing the link poses peculiar challenges, models exist to do so. Second, it contends that the key question is not whether the link exists but under what conditions it is strongest. Finally, it maintains that the best way to analyse the link is to incorporate accurate diagnosis and evaluations into tourism projects, using the approaches and concepts of the literature on impact evaluation.

18.
This article reflects on common challenges and lessons learned during the evaluation of gang prevention programs based on case studies of three federally funded Canadian programs. Elements of evaluation design, implementation, data analysis and reporting of results are discussed. More specifically, the article highlights issues that occur when evaluating community projects focused on preventing extreme risks for violence and the complexity of working in potentially dangerous and/or unstable work environments. Topics covered include the problem with quasi-experimental designs, model fidelity adherence, program documentation, client recruitment and retention, and data collection. Recommendations are provided to improve evaluations of youth gang prevention programs and similar community-based interventions that focus on the prevention of youth violence.

19.
Health care is in the midst of a consumer-oriented technology explosion. Individuals of all ages and backgrounds have discovered eHealth. But the challenges of implementing and evaluating eHealth are just beginning to surface, and, as technology changes, new challenges emerge. Evaluation is critical to the future of eHealth. This article addresses four dimensions of eHealth evaluation: (1) design and methodology issues; (2) challenges related to the technology itself; (3) environmental issues that are not specific to eHealth but pose special problems for eHealth researchers; and (4) logistic or administrative concerns of the evaluation methodology selected. We suggest that these four dimensions must be integrated to provide a holistic framework for designing and implementing eHealth research projects, as well as for understanding the totality of the eHealth intervention. The framework must be flexible enough to adapt to a variety of end users, regardless of whether the end user is a healthcare organization, a for-profit business, a community organization, or an individual. The framework is depicted as a puzzle with four interlocking pieces.

20.
In February 2002, the Health e-Technologies Initiative (HETI), a program office of the Robert Wood Johnson Foundation®, was created to advance the discovery of scientific knowledge regarding the effectiveness of interactive eHealth applications. This article is the introduction to a series of seven articles written by HETI grantees that address challenges, lessons learned, and proposed solutions as researchers implement eHealth projects. From this body of work it is clear that the overall process of conducting evaluation research in eHealth requires careful and detailed planning, recognition of the heightened sensitivity of IRBs and institutions around the electronic collection and communication of personal health information, and a combination of tenacity and creativity to address the inevitable thorny methodological challenges of eHealth research. The use of established guidelines to help standardize the evaluation process, where feasible, is recommended.
