Similar articles
20 similar articles retrieved
1.
In parallel group trials, long‐term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate ‘real‐life’ clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time‐to‐event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank‐preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well‐designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.
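The simplest of these adjustments is easy to illustrate in code. Below is a minimal sketch, assuming a hypothetical patient-level dataset, of the naive approach that censors control-arm patients at their switch time before fitting a Cox model with the lifelines package; the IPCW and rank-preserving structural failure time methods mentioned above exist precisely to repair the informative censoring this simple approach can introduce.

```python
# Sketch: naive censoring at treatment switch before a Cox model.
# Assumes a hypothetical DataFrame with columns: time, event, arm,
# switch_time (NaN if the patient never switched).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def censor_at_switch(df):
    """Censor follow-up at the switch time for patients who switched."""
    out = df.copy()
    switched = out["switch_time"].notna()
    out.loc[switched, "event"] = 0                       # no event observed on original arm
    out.loc[switched, "time"] = out.loc[switched, "switch_time"]
    return out.drop(columns="switch_time")

# Hypothetical data: control patients 2 and 4 switch at 5 and 8 months.
df = pd.DataFrame({
    "time":        [12.0, 10.0, 7.0, 11.0, 6.0, 9.0],
    "event":       [1, 1, 1, 1, 0, 1],
    "arm":         [1, 0, 1, 0, 1, 0],
    "switch_time": [np.nan, 5.0, np.nan, 8.0, np.nan, np.nan],
})
cph = CoxPHFitter()
cph.fit(censor_at_switch(df), duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])
```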

2.
In the past, many clinical trials have withdrawn subjects from the study when they prematurely stopped their randomised treatment and have therefore only collected ‘on‐treatment’ data. Thus, analyses addressing a treatment policy estimand have been restricted to imputing missing data under assumptions drawn from these data only. Many confirmatory trials are now continuing to collect data from subjects in a study even after they have prematurely discontinued study treatment as this event is irrelevant for the purposes of a treatment policy estimand. However, despite efforts to keep subjects in a trial, some will still choose to withdraw. Recent publications for sensitivity analyses of recurrent event data have focused on the reference‐based imputation methods commonly applied to continuous outcomes, where imputation for the missing data for one treatment arm is based on the observed outcomes in another arm. However, the existence of data from subjects who have prematurely discontinued treatment but remained in the study has now raised the opportunity to use this ‘off‐treatment’ data to impute the missing data for subjects who withdraw, potentially allowing more plausible assumptions for the missing post‐study‐withdrawal data than reference‐based approaches. In this paper, we introduce a new imputation method for recurrent event data in which the missing post‐study‐withdrawal event rate for a particular subject is assumed to reflect that observed from subjects during the off‐treatment period. The method is illustrated in a trial in chronic obstructive pulmonary disease (COPD) where the primary endpoint was the rate of exacerbations, analysed using a negative binomial model.
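As a rough illustration of the imputation idea, the sketch below draws post-withdrawal event counts from a negative binomial (gamma-Poisson) model whose rate is assumed to have been estimated from the off-treatment follow-up; the rate, dispersion, and exposure times are hypothetical, not taken from the COPD trial.

```python
# Sketch: impute post-withdrawal exacerbation counts from the rate
# observed while subjects were off treatment but still in the study.
# lam_off and k are assumed to come from a negative binomial fit to
# the off-treatment follow-up; the gamma-Poisson draw reproduces NB
# variability within each imputation.
import numpy as np

rng = np.random.default_rng(2024)

def impute_post_withdrawal_counts(t_missing, lam_off, k, n_imputations=50):
    """Draw NB counts for unobserved exposure times t_missing (years).

    lam_off : off-treatment event rate per year; k : NB dispersion
    (var = mu + mu**2 / k). Returns (n_imputations, n_subjects) counts.
    """
    t_missing = np.asarray(t_missing, dtype=float)
    mu = lam_off * t_missing                         # expected counts
    # Gamma-Poisson mixture == negative binomial with dispersion k
    frailty = rng.gamma(shape=k, scale=1.0 / k, size=(n_imputations, mu.size))
    return rng.poisson(frailty * mu)

# Three withdrawn subjects with 0.5, 1.0, 0.25 years of missing follow-up
imputed = impute_post_withdrawal_counts([0.5, 1.0, 0.25], lam_off=1.8, k=0.9)
print(imputed.mean(axis=0))   # completed counts would feed the NB analysis model
```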

3.
A central objective of empirical research on treatment response is to inform treatment choice. Unfortunately, researchers commonly use concepts of statistical inference whose foundations are distant from the problem of treatment choice. It has been particularly common to use hypothesis tests to compare treatments. Wald’s development of statistical decision theory provides a coherent frequentist framework for use of sample data on treatment response to make treatment decisions. A body of recent research applies statistical decision theory to characterize uniformly satisfactory treatment choices, in the sense of maximum loss relative to optimal decisions (also known as maximum regret). This article describes the basic ideas and findings, which provide an appealing practical alternative to use of hypothesis tests. For simplicity, the article focuses on medical treatment with evidence from classical randomized clinical trials. The ideas apply generally, encompassing use of observational data and treatment choice in nonmedical contexts.
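A minimal sketch of the maximum-regret idea, assuming a two-arm trial with binary outcomes and the simple empirical success rule (which is illustrative here, not necessarily one of the rules characterized in the article):

```python
# Sketch: Monte Carlo evaluation of maximum regret for the empirical
# success rule (choose the treatment with the higher trial mean) in a
# two-arm trial with n subjects per arm. The state-space grid and
# sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 30                                   # subjects per arm
grid = np.linspace(0.0, 1.0, 41)         # candidate success probabilities

def regret(p1, p2, reps=10_000):
    """Expected welfare loss of the empirical success rule in state (p1, p2)."""
    x1 = rng.binomial(n, p1, reps) / n
    x2 = rng.binomial(n, p2, reps) / n
    chosen_p = np.where(x1 >= x2, p1, p2)          # ties go to treatment 1
    return max(p1, p2) - chosen_p.mean()

max_regret = max(regret(p1, p2) for p1 in grid for p2 in grid)
print(f"max regret of the empirical success rule (n={n}): {max_regret:.4f}")
```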

4.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

5.
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
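The sketch below illustrates MI inference for the AUC under an assumed MAR mechanism, with a deliberately simple normal imputation model and Rubin's rules for pooling; the data, the bootstrap step used to approximate proper imputation, and the Hanley-McNeil variance are all illustrative choices, not the techniques evaluated in the paper.

```python
# Sketch: multiple-imputation inference for the AUC under MAR, with a
# normal imputation model within disease groups and Rubin's rules.
import numpy as np

rng = np.random.default_rng(11)

def auc_mann_whitney(cases, controls):
    """AUC as the Mann-Whitney probability P(case > control)."""
    diff = cases[:, None] - controls[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def hanley_mcneil_var(a, n1, n0):
    q1, q2 = a / (2 - a), 2 * a**2 / (1 + a)
    return (a*(1-a) + (n1-1)*(q1 - a**2) + (n0-1)*(q2 - a**2)) / (n1 * n0)

# Hypothetical biomarker data with MAR missingness
d = rng.integers(0, 2, 200)                       # disease status (fully observed)
y = rng.normal(d * 0.9, 1.0)                      # biomarker
miss = rng.random(200) < np.where(d == 1, 0.3, 0.1)

M, aucs, variances = 20, [], []
for _ in range(M):
    y_imp = y.copy()
    for g in (0, 1):
        obs = y[(d == g) & ~miss]
        boot = rng.choice(obs, obs.size, replace=True)   # adds parameter uncertainty
        k = ((d == g) & miss).sum()
        y_imp[(d == g) & miss] = rng.normal(boot.mean(), boot.std(ddof=1), k)
    a = auc_mann_whitney(y_imp[d == 1], y_imp[d == 0])
    aucs.append(a)
    variances.append(hanley_mcneil_var(a, (d == 1).sum(), (d == 0).sum()))

qbar, w, b = np.mean(aucs), np.mean(variances), np.var(aucs, ddof=1)
t = w + (1 + 1/M) * b                              # Rubin's total variance
print(f"pooled AUC = {qbar:.3f} "
      f"(95% CI {qbar-1.96*np.sqrt(t):.3f}, {qbar+1.96*np.sqrt(t):.3f})")
```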

6.
Many commonly used statistical methods for data analysis or clinical trial design rely on incorrect assumptions or assume an over‐simplified framework that ignores important information. Such statistical practices may lead to incorrect conclusions about treatment effects or clinical trial designs that are impractical or that do not accurately reflect the investigator's goals. Bayesian nonparametric (BNP) models and methods are a very flexible new class of statistical tools that can overcome such limitations. This is because BNP models can accurately approximate any distribution or function and can accommodate a broad range of statistical problems, including density estimation, regression, survival analysis, graphical modeling, neural networks, classification, clustering, population models, forecasting and prediction, spatiotemporal models, and causal inference. This paper describes 3 illustrative applications of BNP methods, including a randomized clinical trial to compare treatments for intraoperative air leaks after pulmonary resection, estimating survival time with different multi‐stage chemotherapy regimes for acute leukemia, and evaluating joint effects of targeted treatment and an intermediate biological outcome on progression‐free survival time in prostate cancer.
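The flexibility claim is easiest to see with the Dirichlet process, the canonical BNP prior. Below is a minimal stick-breaking sketch of one draw from a DP mixture of normals; the concentration, base measure, kernel width, and truncation level are arbitrary illustrative choices, not from the paper.

```python
# Sketch: a truncated stick-breaking draw from a Dirichlet process
# mixture of normals, illustrating why BNP priors can approximate an
# essentially arbitrary density.
import numpy as np

rng = np.random.default_rng(3)
alpha, K = 2.0, 50                      # concentration; K=50 truncation leaves
                                        # only negligible leftover stick mass
v = rng.beta(1.0, alpha, K)             # stick-breaking fractions
w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))   # mixture weights
mu = rng.normal(0.0, 3.0, K)            # atoms drawn from the base measure G0

def random_density(x):
    """Density of the random mixture sum_k w_k N(mu_k, 0.5^2)."""
    comps = np.exp(-0.5 * ((x[:, None] - mu) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
    return comps @ w

x = np.linspace(-8, 8, 5)
print(random_density(x))                # one draw from the DP mixture prior
```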

7.
It is suggested that, whenever possible, an experiment be run in a completely randomized fashion. One reason for randomizing is to protect against violations in the usual linear model assumptions. The protection has always been argued on qualitative grounds. This paper quantitatively demonstrates the protection by hypothesizing models in violation of the usual assumptions, mathematically representing the physical act of randomization, and algebraically deriving expected mean squares, EMS, and F tests. It is shown that randomization offers considerable but not complete protection against model violations.

The same methodology is also applied to blocked experiments, i.e. to experiments performed under a specific type of incomplete randomization commonly referred to as blocking. It is shown that blocking offers little protection against certain model violations. The common practice of representing blocks as a treatment factor applied to the experimental units approximates the form of the EMS derived under the violated assumptions model.
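A small Monte Carlo can make the same point numerically. The sketch below, which simulates a time trend in the errors rather than reproducing the paper's algebraic EMS derivations, shows systematic assignment inflating the type-I error of the usual t test while complete randomization keeps it near nominal; all settings are illustrative.

```python
# Sketch: protection from complete randomization under a trend
# violation of the iid error assumption, with a null treatment effect.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n, reps, alpha = 20, 5000, 0.05

def reject_rate(randomize):
    rejections = 0
    for _ in range(reps):
        errors = 0.3 * np.arange(n) + rng.normal(0, 1, n)   # time-trend violation
        units = rng.permutation(n) if randomize else np.arange(n)
        a, b = errors[units[: n // 2]], errors[units[n // 2 :]]
        rejections += ttest_ind(a, b).pvalue < alpha
    return rejections / reps

print(f"systematic assignment: type-I error ~ {reject_rate(False):.3f}")   # inflated
print(f"randomized assignment: type-I error ~ {reject_rate(True):.3f}")    # near 0.05
```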

8.
In this paper, we investigate the effect of tuberculous pericarditis (TBP) treatment on CD4 count changes over time and draw inferences in the presence of missing data. We accounted for missing data and conducted sensitivity analyses to assess whether inferences under the missing at random (MAR) assumption are sensitive to not missing at random (NMAR) assumptions using the selection model (SeM) framework. We conducted sensitivity analysis using the local influence approach and stress-testing analysis. Our analyses showed that the inferences from the MAR analysis are robust to the NMAR assumption and that influential subjects do not overturn the study conclusions about treatment effects and the dropout mechanism. Therefore, the missing CD4 count measurements are likely to be MAR. The results also revealed that TBP treatment does not interact with HIV/AIDS treatment and that TBP treatment has no significant effect on CD4 count changes over time. Although the methods considered were applied to data in the IMPI trial setting, the methods can also be applied to clinical trials with similar settings.

9.
Some multicenter randomized controlled trials (e.g. for rare diseases or with slow recruitment) involve many centers with few patients in each. Under within-center randomization, some centers might not assign each treatment to at least one patient; hence, such centers have no within-center treatment effect estimates and the center-stratified treatment effect estimate can be inefficient, perhaps to an extent with statistical and ethical implications. Recently, combining complete and incomplete centers with a priori weights has been suggested. However, a concern is whether using the incomplete centers increases bias. To study this concern, an approach with randomization models for a finite population was used to evaluate bias of the usual complete center estimator, the simple center-ignoring estimator, and the weighted estimator combining complete and incomplete centers. The situation with two treatments and many centers, each with either one or two patients, was evaluated. Various patient accrual mechanisms were considered, including one involving selection bias. The usual complete center estimator and the weighted estimator were unbiased under the overall null hypothesis, even with selection bias. An actual dermatology clinical trial motivates and illustrates these methods.
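A minimal sketch of the combined estimator, assuming centers of size one or two and a hypothetical a priori weight w; the weights actually derived in the paper are not reproduced here.

```python
# Sketch: combining complete and incomplete centers in a
# many-small-centers trial. Complete centers (one patient per arm)
# contribute within-center differences; singleton centers contribute
# an unpaired between-center contrast.
import numpy as np

def combined_effect(complete, single_t, single_c, w=0.7):
    """complete: (m, 2) array of (treated, control) outcomes per center;
    single_t / single_c: outcomes from centers that enrolled only one
    patient, on treatment / on control. w is an a priori weight."""
    complete = np.asarray(complete, dtype=float)
    within = np.mean(complete[:, 0] - complete[:, 1])    # complete-center estimator
    between = np.mean(single_t) - np.mean(single_c)      # ignores center effects
    return w * within + (1 - w) * between

complete = [(5.1, 4.0), (6.3, 5.9), (4.8, 4.1)]
print(combined_effect(complete, single_t=[5.5, 6.0], single_c=[4.4]))
```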

10.
Communications in Statistics: Theory and Methods, 2012, 41(13-14): 2490-2502
The article deals with methods for constructing experiments carried out in an incomplete split-plot design supplemented by an additional treatment, called a single control. The control treatment has usually been treated as one specific factor level, although this need not be the case. Here the control cannot be connected with the treatment combinations in the experiment, which distinguishes this article from others in the area. The proposed supplementation of whole incomplete split-plot designs leads to designs that satisfy generally accepted methodological requirements, especially randomization. Moreover, we propose several methods for constructing the considered types of designs with desirable statistical properties, such as general balance and efficiency balance of the design with respect to treatment contrasts.

11.
Multiple imputation has emerged as a widely used model-based approach in dealing with incomplete data in many application areas. Gaussian and log-linear imputation models are fairly straightforward to implement for continuous and discrete data, respectively. However, in missing data settings which include a mix of continuous and discrete variables, correct specification of the imputation model could be a daunting task owing to the lack of flexible models for the joint distribution of variables of different nature. This complication, along with accessibility to software packages that are capable of carrying out multiple imputation under the assumption of joint multivariate normality, appears to encourage applied researchers to pragmatically treat the discrete variables as continuous for imputation purposes and subsequently round the imputed values to the nearest observed category. In this article, I introduce a distance-based rounding approach for ordinal variables in the presence of continuous ones. The first step of the proposed rounding process is predicated upon creating indicator variables that correspond to the ordinal levels, followed by jointly imputing all variables under the assumption of multivariate normality. The imputed values are then converted to the ordinal scale based on their Euclidean distances to a set of indicators, with minimal distance corresponding to the closest match. I compare the performance of this technique to crude rounding via commonly accepted accuracy and precision measures with simulated data sets.
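A minimal sketch of the distance-based rounding step, assuming the indicator variables have already been imputed under joint normality by some MI engine; the imputed values shown are hypothetical.

```python
# Sketch of the distance-based rounding idea: represent an ordinal
# variable by level indicators, impute as multivariate normal, then
# map each imputed indicator vector back to the ordinal level whose
# one-hot vector is nearest in Euclidean distance.
import numpy as np

levels = np.eye(3)                        # one-hot targets for 3 ordinal levels

def distance_round(z):
    """Map imputed indicator rows (n, 3) to ordinal levels 0..2."""
    d = np.linalg.norm(z[:, None, :] - levels[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical imputed indicator values from a joint normal model
z_imputed = np.array([[0.9, 0.2, -0.1],
                      [0.1, 0.7,  0.3],
                      [0.4, 0.4,  0.6]])
print(distance_round(z_imputed))          # -> closest ordinal category per row
```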

12.
Summary.  There is a large literature on methods of analysis for randomized trials with noncompliance which focuses on the effect of treatment on the average outcome. The paper considers evaluating the effect of treatment on the entire distribution and general functions of this effect. For distributional treatment effects, fully non-parametric and fully parametric approaches have been proposed. The fully non-parametric approach could be inefficient but the fully parametric approach is not robust to the violation of distribution assumptions. We develop a semiparametric instrumental variable method based on the empirical likelihood approach. Our method can be applied to general outcomes and general functions of outcome distributions and allows us to predict a subject's latent compliance class on the basis of an observed outcome value in observed assignment and treatment received groups. Asymptotic results for the estimators and likelihood ratio statistic are derived. A simulation study shows that our estimators of various treatment effects are substantially more efficient than the currently used fully non-parametric estimators. The method is illustrated by an analysis of data from a randomized trial of an encouragement intervention to improve adherence to prescribed depression treatments among depressed elderly patients in primary care practices.

13.
High dimensional models are receiving much attention in diverse research fields that involve very many parameters and a moderate size of data. Model selection is an important issue in such high dimensional data analysis. Recent literature on the theoretical understanding of high dimensional models covers a wide range of penalized methods, including LASSO and SCAD. This paper presents a systematic overview of recent developments in high dimensional statistical models. We provide a brief review of recent developments in theory, methods, and guidelines on applications of several penalized methods. The review includes appropriate settings in which each reviewed method can be implemented, along with its limitations and potential solutions. In particular, we provide a systematic review of the statistical theory of high dimensional methods by considering a unified high-dimensional modeling framework together with high level conditions. This framework includes (generalized) linear regression and quantile regression as its special cases. We hope our review helps researchers in this field to have a better understanding of the area and provides useful information for future studies.
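As a concrete instance of the penalized methods reviewed, here is a minimal LASSO sketch on simulated p > n data; scikit-learn and the chosen penalty level are used purely for illustration.

```python
# Sketch: sparse variable selection with the LASSO when p > n.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 80, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 2, -1]            # sparse truth: 5 active predictors
y = X @ beta + rng.normal(size=n)

fit = Lasso(alpha=0.2).fit(X, y)          # L1 penalty shrinks most coefs to 0
selected = np.flatnonzero(fit.coef_)
print(f"selected {selected.size} of {p} predictors:", selected[:10])
```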

14.
The trimmed mean is a method of dealing with patient dropout in clinical trials that considers early discontinuation of treatment a bad outcome rather than leading to missing data. The present investigation is the first comprehensive assessment of the approach across a broad set of simulated clinical trial scenarios. In the trimmed mean approach, all patients who discontinue treatment prior to the primary endpoint are excluded from analysis by trimming an equal percentage of bad outcomes from each treatment arm. The untrimmed values are used to calculate means or mean changes. An explicit intent of trimming is to favor the group with lower dropout because having more completers is a beneficial effect of the drug, or conversely, higher dropout is a bad effect. In the simulation study, the difference between treatments estimated from trimmed means was greater than the corresponding effects estimated from untrimmed means when dropout favored the experimental group, and vice versa. The trimmed mean estimates a unique estimand. Therefore, comparisons with other methods are difficult to interpret and the utility of the trimmed mean hinges on the reasonableness of its assumptions: dropout is an equally bad outcome in all patients, and adherence decisions in the trial are sufficiently similar to clinical practice in order to generalize the results. Trimming might be applicable to other inter‐current events such as switching to or adding rescue medicine. Given the well‐known biases in some methods that estimate effectiveness, such as baseline observation carried forward and non‐responder imputation, the trimmed mean may be a useful alternative when its assumptions are justifiable.
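A minimal sketch of the trimming computation, assuming lower outcomes are worse and using the larger arm-level dropout rate as the common trim fraction (one of several possible conventions); the data are simulated.

```python
# Sketch: trimmed-mean treatment difference. Dropouts count as worst
# outcomes and are trimmed first; the same fraction is trimmed from
# both arms, so the arm with lower dropout keeps fewer of its own
# completers out of the analysis.
import numpy as np

def trimmed_mean(completers, n_dropouts, trim_frac):
    """Mean after trimming trim_frac of the worst outcomes, with
    dropouts counted among the worst (they are trimmed first)."""
    n_total = completers.size + n_dropouts
    n_trim = int(np.ceil(trim_frac * n_total))
    kept = np.sort(completers)[max(n_trim - n_dropouts, 0):]
    return kept.mean()

rng = np.random.default_rng(8)
drug, placebo = rng.normal(1.0, 2.0, 90), rng.normal(0.0, 2.0, 80)
drop_drug, drop_placebo = 10, 20                   # dropout counts (arms of 100)
trim = max(drop_drug / 100, drop_placebo / 100)    # equal trim percentage

effect = trimmed_mean(drug, drop_drug, trim) - trimmed_mean(placebo, drop_placebo, trim)
print(f"trimmed-mean treatment difference: {effect:.2f}")
```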

15.
Data analysis for randomized trials including multi-treatment arms is often complicated by subjects who do not comply with their treatment assignment. We discuss here methods of estimating treatment efficacy for randomized trials involving multi-treatment arms subject to non-compliance. One treatment effect of interest in the presence of non-compliance is the complier average causal effect (CACE) (Angrist et al. 1996), which is defined as the treatment effect for subjects who would comply regardless of the assigned treatment. Following the idea of principal stratification (Frangakis & Rubin 2002), we define principal compliance (Little et al. 2009) in trials with three treatment arms, extend CACE and define causal estimands of interest in this setting. In addition, we discuss structural assumptions needed for estimation of causal effects and the identifiability problem inherent in this setting from both a Bayesian and a classical statistical perspective. We propose a likelihood-based framework that models potential outcomes in this setting and a Bayes procedure for statistical inference. We compare our method with a method of moments approach proposed by Cheng & Small (2006) using a hypothetical data set, and further illustrate our approach with an application to a behavioral intervention study (Janevic et al. 2003).
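For the familiar two-arm case, the CACE reduces to the instrumental-variable ratio of Angrist et al. (1996); a minimal sketch with simulated compliers and never-takers follows (the paper's three-arm Bayesian machinery is not reproduced here).

```python
# Sketch: the classical two-arm CACE estimate as an IV ratio: the
# intention-to-treat effect divided by the difference in
# treatment-receipt rates between assignment groups.
import numpy as np

def cace(y, z, d):
    """y: outcome, z: randomized assignment (0/1), d: treatment received (0/1)."""
    y, z, d = map(np.asarray, (y, z, d))
    itt = y[z == 1].mean() - y[z == 0].mean()          # effect of assignment
    compliance = d[z == 1].mean() - d[z == 0].mean()   # estimated complier share
    return itt / compliance

rng = np.random.default_rng(1)
n = 1000
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.6                # 60% compliers, rest never-takers
d = z * complier                              # never-takers refuse treatment
y = 2.0 * d + rng.normal(0, 1, n)             # true effect 2.0 among the treated
print(f"CACE estimate: {cace(y, z, d):.2f}")  # should be near 2.0
```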

16.
Monte Carlo simulation methods are increasingly being used to evaluate the properties of statistical estimators in a variety of settings. The utility of these methods depends upon the existence of an appropriate data-generating process. Observational studies are increasingly being used to estimate the effects of exposures and interventions on outcomes. Conventional regression models allow for the estimation of conditional or adjusted estimates of treatment effects. There is an increasing interest in statistical methods for estimating marginal or average treatment effects. However, in many settings, conditional treatment effects can differ from marginal treatment effects. Therefore, existing data-generating processes for conditional treatment effects are of little use in assessing the performance of methods for estimating marginal treatment effects. In the current study, we describe and evaluate the performance of two different data-generation processes for generating data with a specified marginal odds ratio. The first process is based upon computing Taylor series expansions of the probabilities of success for treated and untreated subjects. The expansions are then integrated over the distribution of the random variables to determine the marginal probabilities of success for treated and untreated subjects. The second process is based upon an iterative process of evaluating marginal odds ratios using Monte Carlo integration. The second method was found to be computationally simpler and to have superior performance compared to the first method.
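A minimal sketch of the second, iterative process, assuming a logistic outcome model with a single normal confounder and solving for the conditional treatment effect by bisection over Monte Carlo estimates of the marginal odds ratio; the model and target are illustrative.

```python
# Sketch: find the conditional treatment log-odds ratio whose implied
# marginal odds ratio matches a target, via Monte Carlo integration
# over the confounder and bisection.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(0, 1, 200_000)                 # confounder draws for MC integration
b0, bx = -1.0, 0.8                            # intercept and confounder effect

def marginal_or(beta_t):
    p1 = 1 / (1 + np.exp(-(b0 + beta_t + bx * x)))   # P(Y=1 | treated, X)
    p0 = 1 / (1 + np.exp(-(b0 + bx * x)))            # P(Y=1 | untreated, X)
    m1, m0 = p1.mean(), p0.mean()                    # marginal success probabilities
    return (m1 / (1 - m1)) / (m0 / (1 - m0))

target, lo, hi = 2.0, 0.0, 3.0                # bisection for the conditional logOR
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if marginal_or(mid) < target else (lo, mid)

print(f"conditional log-OR {mid:.4f} gives marginal OR {marginal_or(mid):.3f}")
```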

17.
Interval-censored failure time data and panel count data are two types of incomplete data that commonly occur in event history studies, and many methods have been developed for their analysis separately (Sun in The statistical analysis of interval-censored failure time data. Springer, New York, 2006; Sun and Zhao in The statistical analysis of panel count data. Springer, New York, 2013). Sometimes one may be interested in, or need to conduct, their joint analysis, such as in clinical trials with composite endpoints, for which no established approach appears to exist in the literature. In this paper, a sieve maximum likelihood approach is developed for the joint analysis, and in the proposed method, Bernstein polynomials are used to approximate unknown functions. The asymptotic properties of the resulting estimators are established and in particular, the proposed estimators of regression parameters are shown to be semiparametrically efficient. In addition, an extensive simulation study was conducted and the proposed method is applied to a set of real data arising from a skin cancer study.
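The Bernstein approximation step is simple to sketch in isolation; in the code below the coefficients are fit by least squares to a known target function rather than by maximizing the sieve likelihood as in the paper, and the target and degree are arbitrary.

```python
# Sketch: approximating an unknown smooth function on [0, 1] with a
# Bernstein polynomial basis, the ingredient the sieve approach uses
# for the baseline functions.
import numpy as np
from math import comb

def bernstein_basis(t, n):
    """(len(t), n+1) matrix of B_{k,n}(t) = C(n,k) t^k (1-t)^(n-k)."""
    t = np.asarray(t)[:, None]
    k = np.arange(n + 1)[None, :]
    binom = np.array([comb(n, j) for j in range(n + 1)])[None, :]
    return binom * t**k * (1 - t)**(n - k)

t = np.linspace(0, 1, 200)
target = np.log1p(3 * t)                 # stand-in for an unknown cumulative hazard
B = bernstein_basis(t, n=6)
coef, *_ = np.linalg.lstsq(B, target, rcond=None)
print(f"max approximation error: {np.abs(B @ coef - target).max():.4f}")
```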

18.
In vitro permeation tests (IVPT) offer accurate and cost-effective development pathways for locally acting drugs, such as topical dermatological products. For assessment of bioequivalence, the FDA draft guidance on generic acyclovir 5% cream introduces a new experimental design, namely the single-dose, multiple-replicate per treatment group design, as IVPT pivotal study design. We examine the statistical properties of its hypothesis testing method—namely the mixed scaled average bioequivalence (MSABE). Meanwhile, some adaptive design features in clinical trials can help researchers make a decision earlier with fewer subjects or boost power, saving resources, while controlling the impact on family-wise error rate. Therefore, we incorporate MSABE in an adaptive design combining the group sequential design and sample size re-estimation. Simulation studies are conducted to study the passing rates of the proposed methods—both within and outside the average bioequivalence limits. We further consider modifications to the adaptive designs applied for IVPT BE trials, such as Bonferroni's adjustment and conditional power function. Finally, a case study with real data demonstrates the advantages of such adaptive methods.
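A minimal sketch of the conditional power calculation that typically drives a sample-size re-estimation step, using the standard Brownian-motion "current trend" formula rather than anything specific to the IVPT or MSABE setting.

```python
# Sketch: conditional power given an interim z statistic z1 observed at
# information fraction t, with the drift set to the current trend and a
# one-sided final test at level alpha.
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.05):
    """P(final Z > z_{1-alpha} | interim z1), drift = estimated current trend."""
    za = norm.ppf(1 - alpha)
    drift = z1 / np.sqrt(t)                       # estimated drift per unit info
    return norm.sf((za - np.sqrt(t) * z1) / np.sqrt(1 - t) - drift * np.sqrt(1 - t))

print(f"CP at halfway with z1 = 1.5: {conditional_power(1.5, t=0.5):.3f}")
```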

19.
Measuring the efficiency of public services: the limits of analysis
Summary.  Policy makers are increasingly seeking to develop overall measures of the efficiency of public service organizations. For that purpose, the use of 'off-the-shelf' statistical tools such as data envelopment analysis and stochastic frontier analysis has been advocated to measure organizational efficiency. The analytical sophistication of such methods has reached an advanced stage of development. We discuss the context within which such models are deployed, their underlying assumptions and their usefulness for a regulator of public services. Four specific model building issues are discussed: the weights that are attached to public service outputs; the specification of the statistical model; the treatment of environmental influences on performance; the treatment of dynamic effects. The paper concludes with recommendations for policy makers and researchers on the development and use of efficiency measurement techniques.

20.
We show that assumptions that are sufficient for estimating an average treatment effect in randomized trials with non-compliance restrict the subgroup means for always takers, compliers, defiers and never takers to a two-dimensional linear subspace of a four-dimensional space. Implications and special cases are exemplified.
