Found 20 similar documents (search time: 0 ms)
1.
Program evaluation is an important source of information to assist organizations to make “evidence-informed” decisions about program planning and development. The objectives of this study were to identify evaluated strategies used by organizations and program developers to build the program evaluation capacity of their workforce, and to describe success factors and lessons learned. Common elements for successful evaluation capacity building (ECB) include: a tailored strategy based on needs assessment, an organizational commitment to evaluation and ECB, experiential learning, training with a practical element, and some form of ongoing technical support within the workplace. ECB is a relatively new field of endeavor, and, while existing studies in ECB are characterized by lower levels of evidence, they suggest the most successful approaches to ECB are likely to be multifaceted. To build the level of evidence in this field, more rigorous study designs need to be implemented in the future.
2.
This study examined the role of the external evaluator as a coach. More specifically, using an evaluative inquiry framework (Preskill & Torres, 1999a, 1999b), it explored the types of coaching that an evaluator employed to promote individual, team, and organizational learning. The study demonstrated that evaluation coaching provided a viable means for an organization with a limited budget to conduct evaluations with the support of a coach. It also demonstrated how the coaching processes supported the development of evaluation capacity within the organization. By examining coaching models outside the field of evaluation, this study identified two forms of coaching, results coaching and developmental coaching, that promoted evaluation capacity building and had not previously been discussed in the evaluation literature.
3.
This paper describes some of the main challenges of evaluating complex interventions, as well as the implications of those challenges for evaluation capacity building. It discusses lessons learned from a case study of an evaluation of Dancing with Parkinson’s, an organization that provides dance classes to people with Parkinson’s disease in Toronto, Canada. The implications are developed through a realist evaluation lens. Key lessons include the need to develop skills to understand program mechanisms and contexts, recognize multiple models of causality, apply mixed-method designs, and support the successful scaling up and spread of an intervention.
4.
Evaluation capacity building (ECB) is a practice that can help organizations conduct and use evaluations; however, there is little research on the sustained impact of ECB interventions. This study provides an empirical inquiry into how ECB develops sustained evaluation practice. Interviews were conducted with 15 organizational leaders from non-profits, higher education institutions, and foundations that had “bought in” to ECB and were at least six months removed from an ECB contract. The results highlight how sustained evaluation practice developed over time and what these practices looked like in real-world settings. A developmental, iterative cycle for how ECB led organizations to sustain evaluation practice emerged, organized around key components of sustainability. First, leadership supported ECB work and resources were dedicated to evaluation. Staff then began to conduct and use evaluation, which led to understanding the benefits of evaluation and promoted value and buy-in among staff. Common barriers and emerging sustainability supports not previously identified in the ECB literature, namely the “personal” factor and ongoing contact with the ECB practitioner, are described. Practical tips for ECB practitioners to promote sustainability are also detailed.
5.
This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Science and Engineering (CISE) Research Experiences for Undergraduates (REU) programs to address the ongoing need for evaluation capacity among the principal investigators who manage program evaluation. The toolkit was the result of collaboration within the CISE REU community, with the purpose of providing targeted instructional resources and tools for quality program evaluation. A key challenge was balancing the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. Benefits of toolkit deployment included cost-effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented along with a discussion of guidelines for facilitating evaluation capacity building across a variety of contexts.
6.
This paper introduces a forum on evaluation capacity building for enhancing the impact of research on brain disorders. It describes the challenges and opportunities of building evaluation capacity among community-based organizations in Ontario involved in enhancing brain health and supporting people living with a brain disorder. Using the example of a capacity-building program called the “Evaluation Support Program”, run by the Ontario Brain Institute, the forum discusses multiple themes, including evaluation capacity building, evaluation culture, and evaluation methodologies appropriate for evaluating complex community interventions. The goal of the Evaluation Support Program is to help community-based organizations build the capacity to demonstrate the value they offer in order to improve, sustain, and spread their programs and activities. A distinctive feature of this forum is that perspectives on the Evaluation Support Program are provided by multiple stakeholders: the community-based organizations, evaluation team members involved in capacity building, thought leaders in evaluation capacity building and evaluation culture, and the funders.
7.
This paper discusses the Ontario Brain Institute’s theory of change for the Evaluation Support Program, a program designed to enhance the role of community organizations in providing care and services for people living with a brain disorder. It does so by helping community organizations build evaluation capacity and foster the use of evidence to inform their activities and services. Building organizations’ capacity to track the ‘key ingredients’ of their successes helps ensure that those successes are replicated and that services are improved to maximize the benefit people receive from them. This paper describes the hypothesized outcomes and early impacts of the Evaluation Support Program, as well as how the program will contribute to the field of evaluation capacity building.
8.
Accumulating evidence indicates that incorporating youth development (YD) principles, strategies, and supports into an organization promotes positive adult and youth outcomes. However, few validated measures assess this type of capacity. The YMCA commissioned a study to validate its Capacity Assessment for Youth Development Programming (Y-CAP), which examines, across seven areas, the organizational infrastructure required to implement YD programs and processes. Survey development was an iterative process informed by existing frameworks, instruments, and pilot testing of items. The Y-CAP was reviewed and revised three times prior to this study, with a final round of revisions made at the start of the validation phase following thorough content, survey methodology, and psychometric reviews. The revised Y-CAP was completed by 123 YMCA implementation teams. Rasch analyses were used to determine the extent to which validity evidence supports the use and interpretation of Y-CAP scores. Convergent validity was assessed by comparing Y-CAP scales to the Algorhythm staff survey for youth-serving organizations, and focus groups informed the consequential validity of the Y-CAP. The results provide strong evidence for the reliability and validity of the Y-CAP, which can be used to guide continuous quality improvement initiatives that support capacity and functioning in youth-serving organizations and programs.
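For readers unfamiliar with the method named above, the sketch below shows what a Rasch analysis estimates in its simplest (dichotomous) form, fitted by joint maximum likelihood to simulated data. The sample size mirrors the study's 123 teams, but every response, item, and parameter value is invented for illustration; this is not the Y-CAP data or the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_teams, n_items = 123, 10                  # sample size mirrors the 123 teams
theta_true = rng.normal(0.0, 1.0, n_teams)  # latent "capacity" per team
b_true = np.linspace(-1.5, 1.5, n_items)    # item difficulty parameters
prob = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_teams, n_items)) < prob).astype(float)  # simulated 0/1 responses

def neg_log_lik(params):
    theta, b = params[:n_teams], params[n_teams:]
    b = b - b.mean()                        # centre difficulties to identify the model
    eta = theta[:, None] - b[None, :]
    # Bernoulli log-likelihood of the Rasch model, summed over all responses
    return -(X * eta - np.logaddexp(0.0, eta)).sum()

res = minimize(neg_log_lik, np.zeros(n_teams + n_items), method="L-BFGS-B")
b_hat = res.x[n_teams:] - res.x[n_teams:].mean()
print("true item difficulties:     ", np.round(b_true, 2))
print("estimated item difficulties:", np.round(b_hat, 2))
```

The estimated difficulties should land close to the true values, which is the core of the measurement argument a Rasch validation makes: item and person parameters can be recovered on a common scale.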
9.
10.
Using developmental evaluation as a system of organizational learning: An example from San Francisco
In the last 20 years, developmental evaluation has emerged as a promising approach to supporting organizational learning in emergent social programs. Through a continuous cycle of inquiry, reflection, and application of knowledge, developmental evaluation serves as a system of tools, methods, and guiding principles intended to support constructive organizational learning. Missing from the developmental evaluation literature, however, is a nuanced framework to guide evaluators in elevating the organizational practices and concepts most relevant for emergent programs. In this article, we describe and reflect on our work developing, piloting, and refining an integrated pilot framework. Drawing on established developmental evaluation inquiry frameworks and incorporating lessons learned from applying the pilot framework, we put forward the Evaluation-led Learning framework to fill that gap, and we encourage others to implement and refine it. We posit that without explicitly incorporating the assessments at the foundation of the Evaluation-led Learning framework, developmental evaluation’s ability to affect organizational learning in productive ways will likely be haphazard and limited.
11.
The demand for improved quality of health promotion evaluation, and for greater capacity to undertake evaluation, is growing, yet evidence about the challenges and facilitators of evaluation practice within the health promotion field is lacking. A limited number of evaluation capacity measurement instruments have been validated in government or non-government organisations (NGOs); however, no instrument has been designed for health promotion organisations. This study aimed to develop and validate an Evaluation Practice Analysis Survey (EPAS) to examine evaluation practices in health promotion organisations. Qualitative interviews, existing frameworks, and existing instruments informed the survey development. Health promotion practitioners from government agencies and NGOs completed the survey (n = 169). Principal components analysis was used to determine scale structure, and Cronbach’s α was used to estimate internal reliability. Logistic regression was conducted to assess the predictive validity of selected EPAS scales. The final instrument comprised 25 scales (125 items). The EPAS demonstrated good internal reliability (α > 0.7) for 23 scales. Dedicated resources and time for evaluation, leadership, organisational culture, and internal support for evaluation showed promising predictive validity. The EPAS can be used to describe elements of evaluation capacity at the individual, organisational, and system levels and to guide initiatives to improve evaluation practice in health promotion organisations.
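As a rough illustration of the psychometric steps this abstract names, the sketch below computes Cronbach's α from its standard definition and runs a principal components analysis on simulated item responses. The respondent count echoes the study's n = 169, but the data, item counts, and variable names are invented; this is not the EPAS analysis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_respondents, n_items = 169, 5               # one hypothetical 5-item scale
latent = rng.normal(size=(n_respondents, 1))  # shared factor driving the items
items = latent + 0.6 * rng.normal(size=(n_respondents, n_items))

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1).sum()
    total_variance = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # 'good' is typically > 0.7

pca = PCA().fit(items)                        # examine scale structure
print("variance explained per component:", np.round(pca.explained_variance_ratio_, 2))
```

A dominant first component plus a high α is the pattern one would expect from items that form a single coherent scale, which is what the EPAS validation checks per scale.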
12.
Policymakers’ demand for increased accountability has compelled organizations to pay more attention to internal evaluation capacity building (ECB). The existing ECB literature has focused on capacity-building experiences and organizational research, with limited attention to the challenges that internal evaluation specialists face in building organizational evaluative capacity. To address this knowledge gap, we conducted a Delphi study with evaluation specialists in the United States’ Cooperative Extension Service and developed a consensus on the most pervasive ECB challenges as well as the most useful strategies for overcoming them. Challenges identified in this study include limited time and resources, limited understanding of the value of evaluation, evaluation being treated as an afterthought, and limited support and buy-in from administrators. Strategies identified include a shift toward an organizational culture in which evaluation is appreciated, buy-in and support from administration, emphasizing the quality rather than the quantity of evaluations, and a strategic approach to ECB. The challenges identified in this study have persisted for decades, meaning administrators must recognize the persistence of these issues and make an earnest investment (financial and human) to make noticeable progress. The Delphi approach can be used more often to prioritize ECB efforts.
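The abstract does not say how consensus was measured, but Delphi studies commonly quantify agreement across panelists with Kendall's coefficient of concordance (W). The sketch below implements the standard tie-free formula on simulated rating rounds; the statistic, the data, and the panel sizes are illustrative assumptions, not details from this study.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's W for an (m panelists x n items) matrix of importance ratings.

    Uses the basic formula without a tie correction: W = 12*S / (m^2 * (n^3 - n)),
    where S is the sum of squared deviations of the items' rank sums.
    """
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank items within each panelist
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(3)
shared = rng.normal(size=8)                          # 8 candidate "challenges"
round1 = shared + 1.5 * rng.normal(size=(12, 8))     # 12 panelists, noisy views
round2 = shared + 0.5 * rng.normal(size=(12, 8))     # later round: views converge
print(f"W after round 1: {kendalls_w(round1):.2f}")  # lower W = weaker consensus
print(f"W after round 2: {kendalls_w(round2):.2f}")  # higher W = stronger consensus
```

W runs from 0 (no agreement) to 1 (perfect agreement), so a rising W across rounds is one conventional signal that a Delphi panel is converging.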
13.
Community-based non-profit organizations rarely have access to research or evaluation evidence to inform their programs and often lack the capacity to gather or use this information independently. In 2016, Wisdom2Action, a network of knowledge mobilization (KMb) experts, policy makers, and service providers across Canada, launched an inter-organizational mentorship program to facilitate the implementation and sharing of best and promising practices within community-based programs for young people. This article outlines the findings from a developmental evaluation of eight mentoring relationships. Drawing on the Promoting Action on Research Implementation in Health Services (PARiHS) model of KMb, we treat mentoring as a type of facilitation that supports increased use of evidence and evaluation information by non-profit organizations, and we identify key themes that support effective organizational mentorship in this sector. Findings reinforce the importance of establishing connected relationships and understanding context in mentoring relationships, creating adaptive and responsive work plans, ensuring consistent communication, and maintaining a focus on capacity building if knowledge mobilization is to occur.
14.
15.
The purpose of this paper is to understand how intergovernmental organizations and international non-governmental organizations evaluated their communication activities and adhered to principles of evaluation methodology from 1995 to 2010, based on a systematic review of available evaluation reports (N = 46) and guidelines (N = 9). Most evaluations complied with principle 1 (defining communication objectives), principle 2 (combining evaluation methods), principle 4 (focusing on outcomes), and principle 5 (evaluating for continued improvement). Compliance was lowest with principle 3 (using a rigorous design) and principle 6 (linking to organizational goals). Overall, evaluation was found not to be integrated, widely adopted, or rigorously practiced in these organizations.
16.
Value for Money (VfM) is an evaluative question about the merit, worth, and significance of resource use in social programs. Although VfM is a critical component of evidence-based programming, it is often overlooked or avoided by evaluators and decision-makers. A framework for evaluating VfM across the dimensions of economy, effectiveness, efficiency, and equity has emerged in response to limitations of traditional economic evaluation. This framework for assessing VfM integrates methods for engaging stakeholders in evaluative thinking to increase acceptance and utilization of evaluations that address questions of resource use. In this review, we synthesize literature on the VfM framework and position it within a broader theory of Utilization-Focused Evaluation (UFE). We then examine mechanisms through which the VfM framework may contribute to increased evaluation use. Finally, we outline avenues for future research on VfM evaluation.
17.
The need to conduct evaluations that reflect the influence of context on complex programs is increasingly recognized in the field of evaluation, yet better data visualization techniques for connecting context with program evaluation data are needed. We share our experience developing a mixed methods timeline to visualize complexity and context alongside evaluation data. Mixed methods timelines provide a meaningful way to show change over time in a format that is both visually engaging and accessible to evaluation audiences. This paper provides an innovative example of using mixed methods timelines to integrate evaluation data with key program activities and milestones, while also showing internal and external contextual influences in one cohesive visual. We present methods and best practices for collecting contextual data and for incorporating a variety of data sources into such a visual. We discuss several strategies for collecting and organizing context-related data, including qualitative interviews, program materials, narrative reports, and member checking with stakeholders and staff. Gathering multiple perspectives is essential to capture the multi-layered elements of program activities and context.
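A bare-bones version of such a timeline can be mocked up in a few lines of matplotlib: program milestones on one band and internal/external contextual events on another, sharing a single time axis. All dates and labels below are invented placeholders, not content from the paper.

```python
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import date

# Hypothetical program milestones and contextual events
milestones = {date(2019, 1, 15): "Program launch",
              date(2019, 9, 1): "Mid-course revision",
              date(2020, 6, 30): "Final cohort completes"}
context = {date(2019, 5, 10): "Leadership change (internal)",
           date(2020, 3, 15): "Pandemic restrictions (external)"}

fig, ax = plt.subplots(figsize=(9, 3))
for when, label in milestones.items():              # program band (top)
    ax.plot(when, 1, "o", color="tab:blue")
    ax.annotate(label, (when, 1), textcoords="offset points",
                xytext=(0, 10), ha="center")
for when, label in context.items():                 # context band (bottom)
    ax.plot(when, 0, "s", color="tab:red")
    ax.annotate(label, (when, 0), textcoords="offset points",
                xytext=(0, -18), ha="center")
ax.set_yticks([0, 1])
ax.set_yticklabels(["Context", "Program"])
ax.set_ylim(-0.6, 1.6)
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))
plt.tight_layout()
plt.show()
```

Keeping the two bands on one axis is what lets an audience read contextual influences against program activity at a glance, which is the core idea the paper describes.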
18.
Institutionalization of health promotion interventions occurs when an organization makes changes to support a program as a component of its routine operations. To date, there has been no way to systematically measure the institutionalization of health promotion interventions outside of healthcare settings. The purpose of the present study was to develop and evaluate the initial psychometric properties of an instrument to assess the institutionalization (i.e., integration) of health activities into faith-based organizations (i.e., churches). The process was informed by previous institutionalization models and led by a team of experts and a community-based advisory panel. We recruited African American church leaders (N = 91) to complete a 22-item instrument. An exploratory factor analysis revealed four factors explaining 62.3% of the variance: (1) Organizational Structures (e.g., an existing health ministry or health team), (2) Organizational Processes (e.g., records on health activities; an instituted health policy), (3) Organizational Resources (e.g., a health promotion budget; space for health activities), and (4) Organizational Communication (e.g., health content in church bulletins; discussion of health within sermons). The measure, the Faith-Based Organization Health Integration Inventory (FBO-HII), had excellent internal consistency reliability (α = .89), as did the subscales (α = .90, .82, .81, and .87). The FBO-HII shows promising initial psychometric properties for assessing the institutionalization of health promotion interventions in faith-based settings.
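To make the factor-analytic step concrete, here is a minimal sketch of an exploratory factor analysis on simulated 22-item responses using scikit-learn's FactorAnalysis with varimax rotation. The four-factor structure and the 91 "leaders" echo the abstract, but the data are synthetic and the variance-explained figure is a rough heuristic, not a reproduction of the FBO-HII results.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_leaders, n_items, n_factors = 91, 22, 4

# Simulate a clean four-factor structure: each item loads on exactly one factor
groups = [range(0, 5), range(5, 10), range(10, 16), range(16, 22)]  # 5+5+6+6 = 22
loadings = np.zeros((n_items, n_factors))
for f, idx in enumerate(groups):
    loadings[list(idx), f] = 0.8
scores = rng.normal(size=(n_leaders, n_factors))
X = scores @ loadings.T + 0.5 * rng.normal(size=(n_leaders, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(X)

# Rough heuristic: sum of squared loadings relative to total item variance
ssl = (fa.components_ ** 2).sum(axis=1)
total_var = X.var(axis=0, ddof=1).sum()
print("sum of squared loadings per factor:", np.round(ssl, 2))
print(f"approx. variance explained by 4 factors: {ssl.sum() / total_var:.1%}")
```

With a structure this clean, each recovered factor should align with one item group, mirroring how the study's four named factors each collect a distinct set of items.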
19.
This article illustrates the application of an impact monitoring and evaluation process to the design and development of a performance monitoring and evaluation framework in the context of human and institutional capacity development. This participative process facilitated stakeholder ownership in several areas, including the design, development, and use of a new monitoring and evaluation system, as well as of its targeted results and accomplishments, supported by timely performance data gathered through ongoing monitoring and evaluation. The process produced a performance indicator map, a comprehensive monitoring and evaluation framework, and data collection templates to promote the development, implementation, and sustainability of the monitoring and evaluation system of a farmers' trade union in an African country.
20.
Background: Despite federal funding for breast cancer screening, fragmented infrastructure and limited organizational capacity hinder access to the full continuum of breast cancer screening and clinical follow-up procedures among rural-residing women. We proposed a regional hub-and-spoke model, partnering with local providers to expand access across North Texas. We describe the development and application of an iterative, mixed-method tool to assess county capacity to conduct community outreach and/or patient navigation in a partnership model.
Methods: Our tool combined publicly available quantitative data with qualitative assessments during site visits and semi-structured interviews.
Results: Application of our tool resulted in shifts in capacity designation in 10 of 17 county partners: 8 implemented local outreach with hub navigation; 9 relied on the hub for both outreach and navigation. Key factors influencing capacity were (1) formal linkages between partner organizations, (2) inter-organizational relationships, (3) existing clinical service protocols, and (4) underserved populations. Qualitative data elucidate how our tool captured these capacity changes.
Conclusions: Our capacity assessment tool enabled the hub to establish partnerships with county organizations by tailoring support to local capacity and needs. Absent a vertically integrated provider network for preventive services in these rural counties, our tool facilitated a virtually integrated regional network to extend access to breast cancer screening to underserved women.