Similar Documents
1.
This article describes the development and implementation of a custom-designed Excel-based visual management tool. The tool’s purpose was to support program planning and evaluation by our resource support team within a paediatric health care setting. Our aims in developing it were to 1) establish a streamlined process and supporting tools to efficiently plan and prioritize program directions and activities; 2) track progress; and 3) evaluate and report on our performance, outputs and outcomes. A collaborative approach based on the ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) change management model and the LEADS (Lead self, Engage others, Achieve results, Develop coalitions, Systems transformation) leadership framework was used to guide the design and implementation processes. Team members reported high perceived effectiveness and efficiency with respect to the tool’s utility in supporting its proposed aims. A graded approach to building knowledge and skills in using the tool, to individual responsibility for data entry, and to accountability by team members facilitated its successful implementation. Administrative support is important for sustainability and continual improvement of the tool to address changing team needs over time.

2.
This article reports on an evaluation methodology development study for K-12 school and university partnerships. The method is based on Engeström's (1987) activity systems analysis (Learning by Expanding: An Activity-Theoretical Approach to Developmental Research. Helsinki: Orienta-Konsultit Oy), which allows researchers to examine qualitative datasets of complex human interactions. This study was designed for participants to evaluate partnership relations and activities. We investigated how the use of activity systems analysis in K-12 school and university partnership evaluation meetings affected participant communication processes. In this study, during a 1-day retreat, K-12 school staff and university staff used a modified activity systems model to identify persisting institutional tensions in their program that often trigger miscommunications and strain their relations. During the discussion sessions, study participants collaboratively examined their partnership relations and identified strategies for overcoming difficulties. Additionally, in subsequent monthly partnership meetings during the school year, participants examined findings from the evaluation to design and implement improvement strategies. The results from this methodology development and implementation study provided researchers and partnership participants a new means to (a) evaluate partnership activities, (b) identify institutional barriers, (c) plan future activities, and (d) listen to and incorporate the ideas of less vocal staff members into the planning of future activities.

3.
Canada has a noteworthy reputation for high quality health care. Nonetheless, street youth are one of our most vulnerable yet underserved populations. Consequently, a medical and dental clinic was created in downtown Ottawa, Ontario to respond to their needs. The purpose of this study is to describe a process evaluation of the clinic during its first year of operation with a focus on program fidelity, dose, reach, and satisfaction. A mixed methods approach was used involving interviews with providers, focus groups with street youth, analysis of Electronic Medical Record (EMR) data, and supplemental information such as document reviews. The evaluation identified areas that were working well along with challenges to program implementation. Areas of concern and possible solutions were presented to the management team, which then helped to plan and make improvements to the clinic. Our evaluation design and working relationship with clinic management promoted the integration of real-time evidence into program improvements.

4.
Increased attention has been placed on evaluating the extent to which clinical programs that support the behavioral health needs of youth have effective processes and result in improved patient outcomes. Several theoretical frameworks from dissemination and implementation (D&I) science have been put forth to guide the evaluation of behavioral health programs implemented in real-world settings. Although a strong rationale for the integration of D&I science in program evaluation exists, few examples are available to guide the evaluator in integrating D&I science in the planning and execution of evaluation activities. This paper seeks to inform program evaluation efforts by outlining two D&I frameworks and describing their integration in program evaluation design. Specifically, it illustrates the use of these frameworks via a case example of a telemental health consultation program in pediatric primary care designed to improve access to behavioral health care for children and adolescents in rural settings. Lessons learned from this effort, as well as recommendations regarding the future evaluation of programs using D&I science to support behavioral health care in community-based settings, are discussed.

5.
Strategic planning is an essential part of management. However, planning processes can consume great amounts of time and resources that small, nonprofit organizations may lack. Moreover, the process that is used can be tedious and may result in plans that are discarded before or during their implementation. In this article, a strategic planning process is presented that incorporates a Policy Delphi group technique and Situation Structuring, a computer program that assists participants in structuring or defining the problems to be addressed in the plan. The organization to which the process is applied is a small, nonprofit hospice. Both the planning process and an evaluation of the implementation of the resultant strategic plan are examined.

6.
Poor diet and undernutrition are common among children living in Bangladesh. To promote appropriate complementary feeding of young children, an economic development (ED) program involving income-generating asset transfer was implemented alongside a social and behavior change (SBC) program. This paper introduces a collaborative monitoring and evaluation (M&E) system in which diverse collaborators (“research group”, “implementation team”, and “coordinators”) facilitate M&E data acquisition by leveraging their comparative advantages. The implementation team built a monitoring system to track the ED participants (n = 2,960) and SBC participants (n ≈ 10,000) over 12 months. Based on the baseline design and the monitoring records, the collaborators planned an impact evaluation introducing a quasi-experimental design using two cross-sectional surveys and a prospective cohort survey of child feeding and nutritional status. Using the various data sources generated by the M&E system, the collaborators will also reveal the program impact pathway through which each intervention component is delivered, received, and utilized, alongside context-specific facilitators and barriers, including the programs’ uptake. The collaborative M&E system enables the sharing of program goals, strengthens collaborators' commitment to the program, and extends the understanding of the program's progress and evaluation activities.

7.
This article constitutes a case study of the development and implementation of the "results framework," an innovative planning and evaluation tool that is rapidly becoming a standard requirement for United States Agency for International Development (USAID) projects. The framework is used in a USAID-funded regional initiative for HIV/AIDS prevention in Central America. This new program evaluation and monitoring tool provides many advantages over traditional evaluation approaches that use outside consultants to provide midterm and end-of-project evaluations. The results-framework process, which spans the life of the project, provides an opportunity for program staff, donors, partners, and evaluators to work as a team to collect and use rich, longitudinal data for project planning, implementation, and evaluation purposes.

8.
Principles-focused evaluations allow evaluators to appraise each principle that guides an organization or program. This study completed a principles-focused evaluation of a new community mental health intervention called Short Term Case Management (STCM) in Toronto, Canada. STCM is a time-limited intervention for clients to address unmet needs and personalized goals over 3 months. Findings showcase that a principles-focused evaluation, assessing whether program principles are guiding, useful, inspiring, developmental and/or evaluable (GUIDE), is a practical formative evaluation approach. Specifically, it offers an understanding of a novel intervention, including its key components of assessment and planning, support plan implementation and evaluation, and care transitions. Findings also highlight that STCM may work best for those clients ready to participate in achieving their own goals. Future research should explore how best to apply the GUIDE framework to complex interventions, including multiple principles, to increase evaluation feasibility and focus.

9.
Portfolio evaluation is the evaluation of multiple projects with a common purpose. While logic models have been used in many ways to support evaluation, and data visualization has been used widely to present and communicate evaluation findings, the combined use of logic models for portfolio evaluation and data visualization to share the findings is surprisingly limited in the literature. With data from a sample portfolio of 209 projects aiming to improve the system of early care and education (ECE), this study illustrated how to use logic models and data visualization techniques to conduct a portfolio evaluation by answering two evaluation questions: “To what extent are the elements of a logic model (strategies, sub-strategies, activities, outcomes, and impacts) reflected in the sample portfolio?” and “Which dominant paths through the logic model were illuminated by the data visualization technique?” For the first question, the visualization technique illuminated several dominant strategies, sub-strategies, activities, and outcomes. For the second question, our visualization techniques made it convenient to identify critical paths through the logic model. Implications for both program evaluation and program planning were discussed.
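As a minimal sketch of the kind of analysis this abstract describes (tallying which logic-model elements, and which strategy-to-activity-to-outcome paths, dominate a portfolio), the following Python snippet assumes a hypothetical CSV export in which each project row is coded with one strategy, one activity, and one outcome; the file name and column names are illustrative and are not the study's actual coding scheme.

```python
from collections import Counter

import pandas as pd

# Hypothetical portfolio export: one row per project, coded with the
# logic-model elements it addresses (file and columns are illustrative).
portfolio = pd.read_csv("portfolio_projects.csv")  # project_id, strategy, activity, outcome

# How often each logic-model element appears across the portfolio.
element_counts = {
    col: portfolio[col].value_counts() for col in ["strategy", "activity", "outcome"]
}
print(element_counts["strategy"])

# Tally complete strategy -> activity -> outcome paths to surface dominant ones;
# counts like these could feed a Sankey or flow diagram for visualization.
path_counts = Counter(
    tuple(row)
    for row in portfolio[["strategy", "activity", "outcome"]].itertuples(index=False)
)
for path, n in path_counts.most_common(5):
    print(" -> ".join(path), f"({n} projects)")
```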

10.
The relationship between program planning and evaluation can be viewed as bidirectional; that is, evaluation methods, procedures, instruments, and criteria not only are determined by, but also influence, program goals and activities. Within the human services context, several factors or sources of reactivity between evaluation and program planning can be identified. These involve (a) quantification of goals and activities, (b) preferences by different audiences for various kinds of evaluation data, (c) values and evaluation criteria, and (d) evaluation requirements and resource availability. Effects of these reactive features are discussed and illustrated with examples drawn from mental health evaluation and accountability practices. It is argued that for evaluation to be a credible and useful practice, evaluators should plan their efforts and assess their own effectiveness within the larger context of human service systems.

11.
Theory-based evaluation (TBE) is an evaluation method that shows how a program will work under certain conditions and has been supported as a viable, evidence-based option in cases where randomized trials or high-quality quasi-experiments are not feasible. Despite the model's widely accepted theoretical appeal, there are few examples of its well-implemented use, probably because of the time and money required for planning and confusion over the definitions of research and evaluation functions and roles. In this paper, we describe the development of a theory-based evaluation design in a Math and Science Partnership (MSP) research project funded by the U.S. National Science Foundation (NSF). Through this work we developed an organizational model distinguishing between and integrating evaluation and research functions, explicating personnel roles and responsibilities, and highlighting connections between research and evaluation work. Although the research and evaluation components operated on independent budgeting, staffing, and implementation activities, we were able to combine datasets across activities to allow us to assess the integrity of the program theory, not just the hypothesized connections within it. This model has since been used for proposal development and has been invaluable as it creates a research and evaluation plan that is seamless from the beginning.

13.
Multi-sectoral programs that involve stakeholders in agriculture, nutrition and health care are essential for responding to nutrition problems such as vitamin A deficiency among pregnant and lactating women and their infants in many poor areas of lower income countries. Yet planning such multi-sectoral programs and designing appropriate evaluations, to respond to different disciplinary cultures of evidence, remain a challenge. We describe the context, program development process, and evaluation design of the Mama SASHA project (Sweetpotato Action for Security and Health in Africa) which promoted production and consumption of a bio-fortified, orange-fleshed sweetpotato (OFSP). In planning the program we drew upon information from needs assessments, stakeholder consultations, and a first round of the implementation evaluation of a pilot project. The multi-disciplinary team worked with partner organizations to develop a program theory of change and an impact pathway which identified aspects of the program that would be monitored and established evaluation methods. Responding to the growing demand for greater rigour in impact evaluations, we carried out quasi-experimental allocation by health facility catchment area, repeat village surveys for assessment of change in intervention and control areas, and longitudinal tracking of individual mother-child pairs. Mid-course corrections in program implementation were informed by program monitoring, regular feedback from implementers and partners’ meetings. To assess economic efficiency and provide evidence for scaling we collected data on resources used and project expenses. Managing the multi-sectoral program and the mixed methods evaluation involved bargaining and trade-offs that were deemed essential to respond to the array of stakeholders, program funders and disciplines involved.
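Purely as a hedged illustration, and not as the Mama SASHA team's actual analysis, the sketch below shows the kind of estimate a quasi-experimental design with repeat surveys in intervention and control areas supports: a simple difference-in-differences model fitted in Python. The data file, outcome variable, and column names are assumed for the example.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stacked data from two survey rounds: one row per respondent,
# with indicators for intervention catchment areas and the follow-up round,
# plus a catchment-area identifier for clustered standard errors.
surveys = pd.read_csv("village_survey_rounds.csv")
# assumed columns: ofsp_consumption, intervention_area (0/1), followup_round (0/1), catchment_id

# Difference-in-differences: the interaction term estimates the change in
# intervention areas over and above the change observed in control areas.
model = smf.ols(
    "ofsp_consumption ~ intervention_area * followup_round",
    data=surveys,
).fit(cov_type="cluster", cov_kwds={"groups": surveys["catchment_id"]})

print(model.summary())
```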

14.
Since the 1970s, the federal government has spearheaded major national education programs to reduce the burden of chronic diseases in the United States. These prevention and disease management programs communicate critical information to the public, those affected by the disease, and health care providers. The National Diabetes Education Program (NDEP), the leading federal program on diabetes sponsored by the National Institutes of Health (NIH) and the Centers for Disease Control and Prevention (CDC), uses primary and secondary quantitative data and qualitative audience research to guide program planning and evaluation. Since 2006, the NDEP has filled the gaps in existing quantitative data sources by conducting its own population-based survey, the NDEP National Diabetes Survey (NNDS). The NNDS is conducted every 2–3 years and tracks changes in knowledge, attitudes and practice indicators in key target audiences. This article describes how the NDEP has used the NNDS as a key component of its evaluation framework and how it applies the survey results for strategic planning and program improvement. The NDEP's use of the NNDS illustrates how a program evaluation framework that includes periodic population-based surveys can serve as an evaluation model for similar national health education programs.

15.
Dione Hills, Human Relations, 1998, 51(12): 1457–1476
Organizations involved in the development of innovative social programs are coming under increasing pressure to establish structures through which the users, targets, or beneficiaries of program activities can be involved in program planning and implementation. The dynamic that this sets up between the program and its wider environment, particularly in areas in which new social movements are operating, can be quite challenging. This article seeks to examine these dynamics, particularly in the context of program evaluation. It is argued that an engaged evaluation approach (drawing on an action research orientation) has particular strengths in such a situation, although the limitations and challenges of such an approach are also outlined. It may, for example, require careful management in terms of the dynamics of the evaluation team. The discussion is illustrated by examples drawn from the evaluation of a European program for disabled people in which the author, as a member of a research team from The Tavistock Institute, was involved.

16.
Background: Health inequities are exacerbated when health promotion programs and resources do not reach selected populations. Local health departments (LHDs) have the potential to address health equity via engaging priority populations in their work. However, we do not have an understanding of what local agencies are doing on this front. Methods: In the summer of 2016, we collaborated with informants from thirteen LHDs across North Carolina. Via semi-structured interviews, the research team asked informants about their LHD’s understanding of health equity and engaging priority populations in program planning, implementation, and evaluation. Findings: All informants discussed that a key function of their LHD was to improve the health of all residents. LHDs with a more comprehensive understanding of health equity engaged members of priority populations in their organizations’ efforts to a greater extent than LHDs with a more limited understanding. Additionally, while all LHDs identified similar barriers to engaging priority populations, LHDs that identified facilitators more comprehensively engaged members of the priority population in program planning, implementation, and evaluation. Conclusions: LHDs are ideally situated between the research and practice worlds to address health equity locally. To promote this work, we should ensure LHDs hold an understanding of health equity, have the means to realize facilitators of health equity work, and recognize the complex context in which health equity work exists.

17.
Program evaluation is an important source of information to assist organizations to make “evidence-informed” decisions about program planning and development. The objectives of this study were to identify evaluated strategies used by organizations and program developers to build the program evaluation capacity of their workforce, and to describe success factors and lessons learned. Common elements for successful evaluation capacity building (ECB) include: a tailored strategy based on needs assessment, an organizational commitment to evaluation and ECB, experiential learning, training with a practical element, and some form of ongoing technical support within the workplace. ECB is a relatively new field of endeavor, and, while existing studies in ECB are characterized by lower levels of evidence, they suggest the most successful approaches to ECB are likely to be multifaceted. To build the level of evidence in this field, more rigorous study designs need to be implemented in the future.

18.
Data from large-scale registers is often underutilized when evaluating addiction treatment programs. Since many programs collect register data regarding clients and interventions, there is a potential to make greater use of such records for program evaluation. The purpose of this article is to discuss the value of using large-scale registers in the evaluation and program planning of addiction treatment systems and programs. Sweden is used as an example of a country where register data is both available and starting to be used in national evaluation and program planning efforts. The article focuses on possibilities, limitations, and practicalities when using large-scale register data to conduct evaluations and program planning of addiction treatment programs. The main conclusions are that using register data for evaluation provides large amounts of data at low cost, that limitations associated with the use of register data may be handled statistically, that register data can answer important questions in the planning of addiction treatment programs, and that more accurate measures are needed to account for the diversity of client populations.

19.
This study presents an evaluation of the implementation quality of the Chilean program Crecer Jugando (CJ), a group-based parenting program of 16 weekly sessions for children 0 to 4 years old and their primary caregivers, aimed at promoting positive caregiver-child interaction. The implementation of CJ in two public health care centers (HCC) in Chile’s Metropolitan Region was assessed based on Donabedian’s theoretical model, focusing on the dimensions of the program’s structure (e.g., infrastructure and supplies), processes (e.g., coordination of the CJ team with the HCCs, participants’ attendance, CJ team interaction with participating children), and preliminary outcomes (i.e., parenting stress, caregiver-child interaction). A total of 63 main caregiver-child dyads participated in the study, which took place over a six-month period. Results indicated that the CJ program was feasible to implement in the two HCCs and would benefit from improved coordination with the HCCs and better quality of interaction between the CJ team and participating children. After participation in the CJ program, caregivers showed a decrease in their parenting stress. Lessons learned are discussed.
