Similar Articles (20 results found)
1.
This paper provides an introduction to utilities for statisticians working mainly in clinical research who have not had experience of health technology assessment work. Utility is the numeric valuation applied to a health state based on the preference for being in that state relative to perfect health. Utilities are often combined with survival data in health economic modelling to obtain quality-adjusted life years. Several methods are available for deriving the preference weights, defining the health states to which they are applied, and combining them to estimate utilities, and the clinical statistician has valuable skills that can be applied to ensuring the robustness of the trial design, data collection and analyses used to obtain and handle these data. In addition to raising awareness of the subject and providing source references, the paper outlines the concepts and approaches around utilities using examples, discusses some of the key issues, and proposes areas where statisticians can collaborate with health economic colleagues to improve the quality of this important element of health technology assessment. Copyright © 2014 John Wiley & Sons, Ltd.
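As a toy illustration of how utilities and survival time combine into quality-adjusted life years, the sketch below multiplies each state's utility by the time spent in it and sums; the utility weights and durations are entirely hypothetical.

```python
# Minimal sketch: QALYs as the utility-weighted area under a
# patient's health-state timeline. All numbers are hypothetical.
health_states = [
    # (state, utility weight, years spent in state)
    ("progression-free", 0.80, 2.5),
    ("progressed",       0.55, 1.0),
]

qalys = sum(utility * years for _, utility, years in health_states)
print(f"Total QALYs: {qalys:.2f}")  # 0.80*2.5 + 0.55*1.0 = 2.55
```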

2.
Logistic regression analysis is widely used in epidemiologic studies concerned with quantifying an association between a study factor (i.e., an exposure variable) and a health outcome (i.e., disease status). This paper reviews the general characteristics of the logistic model and illustrates its use in epidemiologic inquiry. Particular emphasis is given to the control of extraneous variables in the context of follow-up and case-control studies. Techniques for both unconditional and conditional maximum likelihood estimation of the parameters in the logistic model are described and illustrated. A general analysis strategy is also presented which incorporates the assessment of both interaction and confounding in quantifying an exposure-disease association of interest.
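A minimal sketch of the kind of model the paper reviews: an unconditional maximum likelihood logistic fit in which the exposure-disease odds ratio is adjusted for an extraneous variable. The data are simulated purely for illustration, and statsmodels stands in for whatever software one prefers.

```python
import numpy as np
import statsmodels.api as sm

# Simulate a cohort: disease risk depends on exposure and on age,
# where age plays the role of an extraneous variable to control.
rng = np.random.default_rng(0)
n = 1000
age = rng.normal(50, 10, n)
exposure = rng.binomial(1, 0.4, n)
logit = -4 + 0.7 * exposure + 0.05 * age     # true log odds
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([exposure, age]))
fit = sm.Logit(disease, X).fit(disp=False)
# exp(beta) for the exposure term is the age-adjusted odds ratio
print("adjusted OR:", np.exp(fit.params[1]))
```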

3.
In health technology assessment (HTA), besides network meta-analysis (NMA), indirect comparisons (IC) have become an important tool for providing evidence on two treatments when no head-to-head data are available. Researchers may use the adjusted indirect comparison based on the Bucher method (AIC) or the matching-adjusted indirect comparison (MAIC). While the Bucher method may provide biased results when the included trials differ in baseline characteristics that influence the treatment outcome (treatment effect modifiers), this issue may be addressed by applying the MAIC method if individual patient data (IPD) are available for at least one part of the AIC. Here, the IPD are reweighted to match the baseline characteristics and/or treatment effect modifiers of the published data. However, the MAIC method does not provide a solution for situations where several common comparators are available. In these situations, assuming that the indirect comparisons via the different common comparators are homogeneous, we propose merging their results using meta-analysis methodology to provide a single, potentially more precise, treatment effect estimate. This paper introduces the method for combining several MAIC networks using classic meta-analysis techniques, discusses the advantages and limitations of this approach, and demonstrates a practical application combining several (M)AIC networks using data from Phase III psoriasis randomized controlled trials (RCTs).
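The Bucher adjusted indirect comparison itself reduces to simple arithmetic on the log scale, as the sketch below shows; the two trial summaries are hypothetical numbers, not data from the paper.

```python
import math

# Bucher method: A and B have each been compared with a common
# comparator C in separate trials; the indirect A-vs-B effect is the
# difference of log effects, and the variances add.
log_or_AC, se_AC = -0.40, 0.15   # A vs C (hypothetical)
log_or_BC, se_BC = -0.10, 0.20   # B vs C (hypothetical)

log_or_AB = log_or_AC - log_or_BC
se_AB = math.sqrt(se_AC**2 + se_BC**2)
lo, hi = log_or_AB - 1.96 * se_AB, log_or_AB + 1.96 * se_AB
print(f"indirect OR A vs B: {math.exp(log_or_AB):.2f}, "
      f"95% CI ({math.exp(lo):.2f}, {math.exp(hi):.2f})")
```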

4.
Variable and model selection problems are fundamental to high-dimensional statistical modeling in diverse fields of science. Especially in health studies, many potential factors are usually introduced to determine an outcome variable. This paper deals with the problem of high-dimensional statistical modeling through the analysis of the trauma annual data in Greece for 2005. The data set is divided into experiment and control sets and consists of 6334 observations and 112 factors that include demographic, transport and intrahospital data used to detect possible risk factors of death. In our study, different model selection techniques are applied to the experiment set, and the notion of deviance is used on the control set to assess the fit of the overall selected model. The statistical methods employed in this work were the non-concave penalized likelihood methods (the smoothly clipped absolute deviation, the least absolute shrinkage and selection operator, and the Hard penalty), generalized linear logistic regression, and best subset variable selection. The way of identifying the significant variables in large medical data sets, along with the performance and the pros and cons of the various statistical techniques used, are discussed. The performed analysis reveals the distinct advantages of the non-concave penalized likelihood methods over the traditional model selection techniques.
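One of the penalized methods named above, the least absolute shrinkage and selection operator (LASSO), can be sketched in a few lines; this toy uses simulated data rather than the Greek trauma registry, and scikit-learn in place of the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate many candidate risk factors for a binary death outcome,
# of which only the first three truly matter.
rng = np.random.default_rng(1)
n, p = 500, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.8, 0.5]
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))

# L1 penalty shrinks irrelevant coefficients exactly to zero,
# performing variable selection as part of the fit.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)
print("selected variables:", np.flatnonzero(lasso.coef_[0]))
```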

5.
‘Success’ in drug development is bringing to patients a new medicine that has an acceptable benefit-risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to bring new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being utilised by decision-makers in the determination of the cost-effectiveness of new medicines when making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness in a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to be able to demonstrate the robustness of the subgroup results, and to be able to quantify the uncertainty in the subgroup analyses of cost-effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
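As a reminder of the quantity these subgroup analyses feed into, here is a toy incremental cost-effectiveness ratio (ICER) calculation with entirely hypothetical subgroup costs and QALY gains.

```python
# ICER = incremental cost / incremental QALYs, computed per subgroup.
# All figures are hypothetical illustrations.
WTP_THRESHOLD = 30_000  # assumed willingness-to-pay per QALY

subgroups = {
    # subgroup: (incremental cost, incremental QALYs)
    "biomarker-positive": (12_000, 0.60),
    "biomarker-negative": (12_000, 0.15),
}
for name, (d_cost, d_qaly) in subgroups.items():
    icer = d_cost / d_qaly
    verdict = "cost-effective" if icer <= WTP_THRESHOLD else "not cost-effective"
    print(f"{name}: ICER = {icer:,.0f} per QALY ({verdict})")
```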

6.
Recent analyses seeking to explain variation in area health outcomes often consider the impact on them of latent measures (i.e. unobserved constructs) of population health risk. The latter are typically obtained by forms of multivariate analysis, with a small set of latent constructs derived from a collection of observed indicators, and a few recent area studies take such constructs to be spatially structured rather than independent over areas. A confirmatory approach is often applicable to the model linking indicators to constructs, based on substantive knowledge of relevant risks for particular diseases or outcomes. In this paper, population constructs relevant to a particular set of health outcomes are derived using an integrated model containing all the manifest variables, namely health outcome variables, as well as indicator variables underlying the latent constructs. A further feature of the approach is the use of variable selection techniques to select significant loadings and factors (especially in terms of effects of constructs on health outcomes), so ensuring parsimonious models are selected. A case study considers suicide mortality and self-harm contrasts in the East of England in relation to three latent constructs: deprivation, fragmentation and urbanicity.

7.
In assessing biosimilarity between two products, the question to ask is always “How similar is similar?” Traditionally, the equivalence of the means between products is the primary consideration in a clinical trial. This study suggests an alternative assessment that tests whether a certain percentage of the population of differences lies within a prespecified interval. In doing so, accuracy and precision are assessed simultaneously by judging whether a two-sided tolerance interval falls within a prespecified acceptance range. We further derive an asymptotic distribution of the tolerance limits to determine the sample size for achieving a targeted level of power. Our numerical study shows that the proposed two-sided tolerance interval test controls the type I error rate and provides sufficient power. A real example is presented to illustrate our proposed approach.
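A rough sketch of the decision rule: compute a two-sided tolerance interval for the differences and check containment in the acceptance range. The sketch uses Howe's textbook normal approximation rather than the exact limits derived in the paper, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def two_sided_tolerance_interval(x, coverage=0.9, conf=0.95):
    """Approximate two-sided normal tolerance interval (Howe's method):
    with confidence `conf`, covers at least `coverage` of the population."""
    n = len(x)
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - conf, df=n - 1)        # lower quantile
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

# Hypothetical differences between test and reference product
rng = np.random.default_rng(2)
diff = rng.normal(0.1, 0.5, size=40)
lo, hi = two_sided_tolerance_interval(diff)
acceptance = (-1.5, 1.5)                             # prespecified range
print("similar" if acceptance[0] <= lo and hi <= acceptance[1] else "not similar")
```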

8.
Many health surveys conduct an initial household interview to obtain demographic information and then request permission to obtain detailed information on health outcomes from the respondent's health care providers. A 'complete response' results when both the demographic information and the detailed health outcome data are obtained. A 'partial response' results when the initial interview is complete but, for one reason or another, the detailed health outcome information is not obtained. If 'complete responders' differ from 'partial responders' and the proportion of partial responders in the sample is at least moderately large, statistics that use only data from complete responders may be severely biased. We refer to bias that is attributable to these differences as 'partial non-response' bias. In health surveys it is customary to adjust survey estimates to account for potential differences by employing adjustment cells and weighting to reduce bias from partial response. Before making these adjustments, it is important to ask whether an adjustment is expected to increase or decrease bias from partial non-response. After making these adjustments, an equally important question is 'How well does the method of adjustment work to reduce partial non-response bias?'. The paper describes methods for answering these questions. Data from the US National Immunization Survey are used to illustrate the methods.
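A minimal sketch of the weighting-class adjustment described above: within each adjustment cell, the base weights of complete responders are inflated so they also represent partial responders in the same cell. The cells and weights below are invented for illustration.

```python
import pandas as pd

# Each row is a sampled respondent; `complete` flags a complete response.
df = pd.DataFrame({
    "cell":     ["A", "A", "A", "B", "B", "B"],
    "complete": [1,   1,   0,   1,   0,   0],
    "weight":   [1.0, 1.5, 2.0, 1.0, 1.0, 2.0],
})

# Adjustment factor per cell: total weight / weight of complete responders.
cell_total = df.groupby("cell")["weight"].transform("sum")
cell_complete = (
    df["weight"].where(df["complete"] == 1, 0.0)
      .groupby(df["cell"]).transform("sum")
)
df["adj_weight"] = df["weight"] * cell_total / cell_complete
df.loc[df["complete"] == 0, "adj_weight"] = 0.0   # partials drop out
print(df)
```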

9.
Uncertainty and sensitivity analyses for systems that involve both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty are discussed. In such analyses, the dependent variable is usually a complementary cumulative distribution function (CCDF) that arises from stochastic uncertainty; uncertainty analysis involves the determination of a distribution of CCDFs that results from subjective uncertainty, and sensitivity analysis involves the determination of the effects of subjective uncertainty in individual variables on this distribution of CCDFs. Uncertainty analysis is presented as an integration problem involving probability spaces for stochastic and subjective uncertainty. Approximation procedures for the underlying integrals are described that provide an assessment of the effects of stochastic uncertainty, an assessment of the effects of subjective uncertainty, and a basis for performing sensitivity studies. Extensive use is made of Latin hypercube sampling, importance sampling and regression-based sensitivity analysis techniques. The underlying ideas, which are initially presented in an abstract form, are central to the design and performance of real analyses. To emphasize the connection between concept and computational practice, these ideas are illustrated with an analysis involving the MACCS reactor accident consequence model, a performance assessment for the Waste Isolation Pilot Plant, and a probabilistic risk assessment for a nuclear power station.
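The Latin hypercube sampling step, at least, is easy to sketch: scipy's `qmc` module draws a space-filling sample over two hypothetical uncertain inputs, which are then propagated through a stand-in consequence model (not MACCS).

```python
import numpy as np
from scipy.stats import qmc, norm

# Space-filling LHS over the unit square, then transformed to the
# assumed marginal distributions of two uncertain inputs.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=200)                               # uniform on [0,1]^2
release_rate = norm.ppf(u[:, 0], loc=5.0, scale=1.0)    # normal input
transport = 10 ** qmc.scale(u[:, 1:2], -2, 0).ravel()   # log-uniform input

consequence = release_rate * transport                  # stand-in model
# A full analysis would build one CCDF per epistemic realisation;
# here we just summarise the propagated output.
print("mean consequence:", consequence.mean())
```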

10.
Many factors can influence an individual's level of health. These factors interact, and their overall effects on health are usually measured by an index called the health index. The health index can also be used as an indicator to describe the health level of a community. Since the health index is important, much research has been done to study its determinants. The main purpose of this study is to model the health index of an individual based on classical structural equation modeling (SEM) and Bayesian SEM. For estimation of the parameters in the measurement and structural equation models, the classical SEM applies the robust weighted least-squares approach, while the Bayesian SEM implements the Gibbs sampler algorithm. The Bayesian SEM approach allows the user to incorporate prior information for updating the current information on the parameters. Both methods are applied to data gathered from a survey conducted in Hulu Langat, a district in Malaysia. Based on both the classical and the Bayesian SEM, it is found that demographic status and lifestyle are significantly related to the health index, whereas mental health has no significant relation to it.

11.
We performed a simulation study comparing the statistical properties of the estimated log odds ratio from propensity score analyses of a binary response variable in which missing baseline data had been imputed using a simple imputation scheme (Treatment Mean Imputation), three ways of performing multiple imputation (MI), or a Complete Case analysis. MI that included treatment (treated/untreated) and outcome (for our analyses, outcome was adverse event [yes/no]) in the imputer's model had the best statistical properties of the imputation schemes we studied. MI is feasible to use in situations where one has just a few outcomes to analyze. We also found that Treatment Mean Imputation performed quite well and is a reasonable alternative to MI in situations where it is not feasible to use MI. Treatment Mean Imputation performed better than MI methods that did not include both the treatment and outcome in the imputer's model. Copyright © 2009 John Wiley & Sons, Ltd.
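Treatment Mean Imputation itself is nearly a one-liner in pandas: a missing baseline value is replaced by the mean observed within the patient's treatment arm. The data frame below is invented for illustration.

```python
import pandas as pd

df = pd.DataFrame({
    "treated":  [1, 1, 1, 0, 0, 0],
    "baseline": [10.0, None, 14.0, 8.0, 9.0, None],
})
# Fill each missing baseline with its treatment arm's observed mean.
df["baseline_imputed"] = (
    df.groupby("treated")["baseline"]
      .transform(lambda s: s.fillna(s.mean()))
)
print(df)
```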

12.
Benefit-risk assessment is a fundamental element of drug development with the aim to strengthen decision making for the benefit of public health. Appropriate benefit-risk assessment can provide useful information for proactive intervention in health care settings, which could save lives, reduce litigation, improve patient safety and health care outcomes, and furthermore, lower overall health care costs. Recent development in this area presents challenges and opportunities to statisticians in the pharmaceutical industry. We review the development and examine statistical issues in comparative benefit-risk assessment. We argue that a structured benefit-risk assessment should be a multi-disciplinary effort involving experts in clinical science, safety assessment, decision science, health economics, epidemiology and statistics. Well planned and conducted analyses with clear consideration on benefit and risk are critical for appropriate benefit-risk assessment. Pharmaceutical statisticians should extend their knowledge to relevant areas such as pharmaco-epidemiology, decision analysis, modeling, and simulation to play an increasingly important role in comparative benefit-risk assessment.

13.
PROBABILITY-BASED OPTIMAL DESIGN
Optimal design of experiments has generally concentrated on parameter estimation and, to a much lesser degree, on model discrimination. Often an experimenter is interested in a particular outcome and wishes to maximize in some way the probability of this outcome. We propose a new class of compound criteria and designs that address this issue for generalized linear models. The criteria offer a method of achieving designs that possess the properties of efficient parameter estimation and a high probability of a desired outcome.
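To make the idea concrete, the sketch below evaluates, for a one-factor logistic model with assumed parameter values, a compound score that mixes D-efficiency with the average log-probability of the desired outcome at the design points. The weighting and the specific functional form are illustrative guesses, not the paper's criterion.

```python
import numpy as np

beta = np.array([-1.0, 2.0])        # assumed parameter values

def compound_criterion(design, kappa=0.5):
    """kappa weights estimation efficiency against outcome probability."""
    X = np.column_stack([np.ones_like(design), design])
    p = 1 / (1 + np.exp(-X @ beta))
    W = np.diag(p * (1 - p))        # logistic information weights
    log_det = np.linalg.slogdet(X.T @ W @ X)[1]   # D-optimality part
    log_prob = np.log(p).mean()     # probability-of-outcome part
    return kappa * log_det + (1 - kappa) * log_prob

# Compare three candidate two-point designs; a degenerate design
# (replicated point) scores -inf because its information is singular.
for design in ([-1.0, 1.0], [0.0, 1.5], [0.5, 0.5]):
    print(design, compound_criterion(np.array(design)))
```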

14.
When the outcome of a screening test is expressed by the probabilities of k possible outcomes among individuals with a certain physiologic condition and by the corresponding probabilities among individuals without the condition, the screening usefulness of the test depends on the relative likelihood that its result may properly alter the management of a given patient. New statistical methods are introduced to apply the screening usefulness concept to the assessment of combined or multivalued tests. The method is applied to assess the usefulness of genotypes at cytochrome P450 1A1 and glutathione-S-transferase-μ as biomarkers of susceptibility to developing lung cancer. The argument and methods developed should be applicable to the statistical assessment of screening tests for a wide range of physiologic conditions.
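The core quantity, an outcome-specific likelihood ratio for a k-valued test, can be sketched directly; the genotype categories and probabilities below are hypothetical, not the paper's estimates.

```python
# Likelihood ratio per test outcome: P(outcome | condition) divided
# by P(outcome | no condition). All probabilities are hypothetical.
outcomes  = ["genotype 1", "genotype 2", "genotype 3"]
p_with    = [0.50, 0.30, 0.20]   # among individuals with the condition
p_without = [0.25, 0.35, 0.40]   # among individuals without it

for o, pw, pn in zip(outcomes, p_with, p_without):
    print(f"{o}: LR = {pw / pn:.2f}")
# An LR far from 1 marks an outcome that could properly alter management.
```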

15.
In parallel group trials, long-term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate 'real-life' clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time-to-event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank-preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well-designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.
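The simplest adjustment mentioned, censoring control-arm patients at the time of switch, can be sketched as follows, using the lifelines package and invented data; note the paper's caveat that this naive approach can introduce informative censoring bias.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Control-arm patients; switch_time is NaN for those who never switched.
# In this toy data, any recorded switch occurs before the event time.
ctrl = pd.DataFrame({
    "time":        [12.0, 20.0, 7.0, 15.0, 30.0],
    "death":       [1,    1,    0,   1,    0],
    "switch_time": [None, 10.0, None, 9.0, 18.0],
})
switched = ctrl["switch_time"].notna()
ctrl["adj_time"] = ctrl["time"].where(~switched, ctrl["switch_time"])
ctrl["adj_event"] = ctrl["death"].where(~switched, 0)  # censor at switch

kmf = KaplanMeierFitter()
kmf.fit(ctrl["adj_time"], event_observed=ctrl["adj_event"])
print("median survival (switch-censored):", kmf.median_survival_time_)
```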

16.
In this article, we propose a flexible parametric (FP) approach for adjusting for covariate measurement errors in regression that can accommodate replicated measurements on the surrogate (mismeasured) version of the unobserved true covariate, taken either on all the study subjects or on a sub-sample of the study subjects as error assessment data. We utilize the general framework of the FP approach proposed by Hossain and Gustafson in 2009 for adjusting for covariate measurement errors in regression. The FP approach is then compared with the existing non-parametric approaches when error assessment data are available on the entire sample of the study subjects (complete error assessment data), considering covariate measurement error in a multiple logistic regression model. We also developed the FP approach for when error assessment data are available on a sub-sample of the study subjects (partial error assessment data) and investigated its performance using both simulated and real-life data. Simulation results reveal that, in comparable situations, the FP approach performs as well as or better than the competing non-parametric approaches in eliminating the bias that arises in the estimated regression parameters due to covariate measurement errors. It also results in better efficiency of the estimated parameters. Finally, the FP approach is found to perform adequately well in terms of bias correction, confidence coverage, and in achieving appropriate statistical power under partial error assessment data.
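For contrast with the flexible parametric approach, here is the classical attenuation correction that replicate surrogate measurements make possible in a linear model: estimate the error variance from the replicates and de-attenuate the naive slope. This is a textbook method on simulated data, not the paper's FP estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(0, 1, n)                  # true covariate (unobserved)
w1 = x + rng.normal(0, 0.5, n)           # surrogate, replicate 1
w2 = x + rng.normal(0, 0.5, n)           # surrogate, replicate 2
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)  # outcome; true slope = 2

w_bar = (w1 + w2) / 2
var_u = np.var(w1 - w2, ddof=1) / 2       # per-replicate error variance
var_wbar = np.var(w_bar, ddof=1)
lam = (var_wbar - var_u / 2) / var_wbar   # reliability of the mean

beta_naive = np.cov(w_bar, y)[0, 1] / var_wbar   # attenuated slope
print("naive:", beta_naive, "corrected:", beta_naive / lam)
```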

17.
Health care audits are crucial in managing the government insurance programs that are estimated to have losses amounting to billions of dollars every year. Statistical methods such as sampling have long been used to handle their size and complexity. Sampling from health care claims data can benefit from multi-stage approaches, especially when the evaluation of the tradeoffs between precision and cost is important. The use of decision models could help health care auditors and policy makers make the best use of these sampling outputs. This paper proposes an integrated multi-stage sampling and decision-making framework that enables auditors to address the tradeoffs between audit costs and expected overpayment recovery. We illustrate the framework and discuss insights utilizing a variety of overpayment scenarios for payment populations, including U.S. Medicare Part B claims payment data.
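A single-stage version of the underlying audit-sampling arithmetic, extrapolating sampled overpayments to the claims population with a conservative lower confidence bound, might look like the following; all figures are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
N = 10_000                                  # claims in the population
sample = rng.exponential(50, size=200)      # audited overpayments ($)

mean, se = sample.mean(), stats.sem(sample)
# One-sided 95% lower confidence bound on the mean overpayment,
# often used as the conservative recovery demand in audits.
lcb = mean - stats.t.ppf(0.95, df=len(sample) - 1) * se
print(f"point estimate: ${N * mean:,.0f}, "
      f"95% LCB recovery: ${N * lcb:,.0f}")
```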

18.
This presentation addresses a number of issues pertinent to the collection and management of occupational exposure data and offers suggestions to those persons who are developing systems to handle large volumes of occupational exposure data. A perspective is taken that aims to meet the traditional objectives of industrial hygienists while accommodating epidemiologic needs for linking occupational exposure and health outcome data. The suggestions are based on experience gained through the retrospective use of industrial hygiene data in a large number of epidemiologic studies.

19.
In disease mapping, health outcomes measured at the same spatial locations may be correlated, so one can consider jointly modeling the multivariate health outcomes while accounting for their dependence. The general approaches often used for joint modeling include shared component models and multivariate models. An alternative way to model the association between two health outcomes, when one outcome can naturally serve as a covariate of the other, is to use an ecological regression model. For example, in our application, preterm birth (PTB) can be treated as a predictor for low birth weight (LBW) and vice versa. We therefore propose blending the ideas of joint modeling and ecological regression to jointly model the relative risks for LBW and PTB over the health districts of Saskatchewan, Canada, in 2000–2010. This approach is helpful when proxies for areal-level contextual factors can be derived from the outcomes themselves and direct information on risk factors is not readily available. Our results indicate that the proposed approach improves the model fit when compared with conventional joint modeling methods. Further, we show that when no strong spatial autocorrelation is present, joint outcome modeling using only independent error terms can still provide a better model fit than separate modeling.

20.
The draft addendum to the ICH E9 regulatory guideline asks for explicit definition of the treatment effect to be estimated in clinical trials. The draft guideline also introduces the concept of intercurrent events to describe events that occur post-randomisation and may affect efficacy assessment. Composite estimands allow intercurrent events to be incorporated into the definition of the endpoint. A common example of an intercurrent event is discontinuation of randomised treatment, and use of a composite strategy would assess the treatment effect based on a variable that combines the outcome variable of interest with discontinuation of randomised treatment. Use of a composite estimand may avoid the need for imputation, which would be required by a treatment policy estimand. The draft guideline gives the example of a binary approach for specifying a composite estimand. When the variable is measured on a non-binary scale, other options are available in which the intercurrent event is given an extreme unfavourable value, for example comparison of median values or analysis based on categories of response. This paper reviews approaches to deriving a composite estimand and contrasts the use of this estimand with the treatment policy estimand. The benefits of using each strategy are discussed, and examples of the use of the different approaches are given for a clinical trial in nasal polyposis and a steroid reduction trial in severe asthma.
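A sketch of the binary composite strategy mentioned above: a patient counts as a responder only if the clinical response criterion is met and the intercurrent event (treatment discontinuation) did not occur. The data are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
response = rng.binomial(1, 0.6, n)        # clinical response yes/no
discontinued = rng.binomial(1, 0.2, n)    # intercurrent event yes/no

# Composite responder: response achieved AND randomised treatment kept.
composite = response & (1 - discontinued)
print("response rate, outcome alone:     ", response.mean())
print("response rate, composite estimand:", composite.mean())
```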

