Similar articles
20 similar articles found (search time: 31 ms)
1.
Heavily right-censored time to event, or survival, data arise frequently in research areas such as medicine and industrial reliability. Recently, there have been suggestions that auxiliary outcomes which are more fully observed may be used to “enhance” or increase the efficiency of inferences for a primary survival time variable. However, efficiency gains from this approach have mostly been very small. Most of the situations considered have involved semiparametric models, so in this note we consider two very simple fully parametric models. In the one case involving a correlated auxiliary variable that is always observed, we find that efficiency gains are small unless the response and auxiliary variable are very highly correlated and the response is heavily censored. In the second case, which involves an intermediate stage in a three-stage model of failure, the efficiency gains can be more substantial. We suggest that careful study of specific situations is needed to identify opportunities for “enhanced” inferences, but that substantial gains seem more likely when auxiliary information involves structural information about the failure process.

2.
One of the main goals for a phase II trial is to screen and select the best treatment to proceed onto further studies in a phase III trial. Under the flexible design proposed elsewhere, we discuss sample size calculation for cluster randomization trials with a given desired probability of correct selection to choose the best treatment when one treatment is better than all the others. We develop exact procedures for calculating the minimum required number of clusters with a given cluster size (or the minimum number of patients with a given number of repeated measurements) per treatment. An approximate sample size and the evaluation of its performance for two arms are also given. To help readers employ the results presented here, tables are provided to summarize the resulting minimum required sample sizes for cluster randomization trials with two arms and three arms in a variety of situations. Finally, to illustrate the sample size calculation procedures developed here, we use data from a cluster randomization trial studying the association between dietary sodium and blood pressure.
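The abstract above develops exact procedures; as a rough, hedged sketch of the underlying idea, the snippet below estimates the probability of correct selection (PCS) by Monte Carlo under an assumed normal cluster-mean model. All parameter values (between/within-cluster variances, effect size `delta`, number of arms) are illustrative assumptions, not the paper's settings.

```python
# Monte Carlo estimate of the probability of correct selection (PCS):
# with k arms, the best arm's true mean exceeds the others by delta, and
# each arm's estimate is a mean over n_clusters cluster means
# (between-cluster variance sigma2_b, within-cluster sigma2_w, cluster size m).
import numpy as np

rng = np.random.default_rng(2024)

def pcs(n_clusters, m, delta, sigma2_b=1.0, sigma2_w=4.0, k=3, n_sim=20000):
    # variance of one cluster mean, then of the arm-level estimate
    var_cluster_mean = sigma2_b + sigma2_w / m
    se = np.sqrt(var_cluster_mean / n_clusters)
    # arm 0 is truly best by delta; the other arms share a common mean of 0
    means = np.zeros(k)
    means[0] = delta
    est = rng.normal(means, se, size=(n_sim, k))
    return np.mean(est.argmax(axis=1) == 0)

print(pcs(n_clusters=10, m=20, delta=0.5))   # moderate PCS
print(pcs(n_clusters=40, m=20, delta=0.5))   # more clusters raise the PCS
```

Inverting this relationship — searching for the smallest `n_clusters` with PCS above a target — mirrors the minimum-cluster-number calculation the abstract describes, though the paper's procedures are exact rather than simulation-based.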

3.
Randomized clinical trials with count measurements as the primary outcome are common in various medical areas such as seizure counts in epilepsy trials, or relapse counts in multiple sclerosis trials. Controlled clinical trials frequently use a conventional parallel-group design that assigns subjects randomly to one of two treatment groups and repeatedly evaluates them at baseline and intervals across a treatment period of a fixed duration. The primary interest is to compare the rates of change between treatment groups. Generalized estimating equations (GEEs) have been widely used to compare rates of change between treatment groups because of their robustness to misspecification of the true correlation structure. In this paper, we derive a sample size formula for comparing the rates of change between two groups in a repeatedly measured count outcome using GEE. The sample size formula incorporates general missing patterns such as independent missing and monotone missing, and general correlation structures such as AR(1) and compound symmetry (CS). The performance of the sample size formula is evaluated through simulation studies. Sample size estimation is illustrated by a clinical trial example from epilepsy.
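The paper's formula is for count outcomes with general missing patterns; as a simplified analogue, the sketch below gives the standard longitudinal-data sample size for comparing slopes (rates of change) of a continuous outcome under a compound-symmetry correlation, in the style of textbook GEE approximations. It is not the paper's count-outcome formula, and all numeric inputs are illustrative.

```python
# Sample size per group for comparing rates of change between two arms with
# repeated continuous measurements: a standard longitudinal/GEE-style
# approximation used here as a hedged analogue of the paper's count formula.
import numpy as np
from scipy.stats import norm

def n_per_group_slopes(delta, sigma2, rho, times, alpha=0.05, power=0.9):
    t = np.asarray(times, dtype=float)
    s_tt = np.sum((t - t.mean()) ** 2)
    # variance of a subject-level OLS slope under compound symmetry
    var_slope = sigma2 * (1 - rho) / s_tt
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * z ** 2 * var_slope / delta ** 2))

# 5 visits at times 0..4, outcome variance 4, within-subject correlation 0.5,
# detecting a slope difference of 0.25 with 90% power at two-sided alpha 0.05
print(n_per_group_slopes(0.25, 4.0, 0.5, [0, 1, 2, 3, 4]))
```

Adding more visits spreads out the time points, shrinks the slope variance, and reduces the required sample size, which is the same qualitative behavior one expects from the paper's formula.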

4.
In clinical trials with a time-to-event endpoint, subjects are often at risk for events other than the one of interest. When the occurrence of one type of event precludes observation of any later events or alters the probability of subsequent events, the situation is one of competing risks. During the planning stage of a clinical trial with competing risks, it is important to take all possible events into account. This paper gives expressions for the power and sample size for competing risks based on a flexible parametric Weibull model. Nonuniform accrual to the study is considered and an allocation ratio other than one may be used. Results are also provided for the case where two or more of the competing risks are of primary interest.
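As a generic illustration of this kind of planning calculation (not the paper's exact expressions, which additionally handle nonuniform accrual and unequal allocation), the sketch below combines Schoenfeld's required-events formula with the probability, under assumed Weibull cause-specific hazards, that a subject experiences the event of interest by the end of follow-up. All shapes, scales, and the hazard ratio are made-up illustrative values.

```python
# Sketch: total sample size for a log-rank comparison of the cause-specific
# hazard of the event of interest under competing risks. Required events come
# from Schoenfeld's formula; P(event of interest by tau) is computed from two
# independent Weibull cause-specific hazards.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def prob_event_of_interest(tau, shape1, scale1, shape2, scale2):
    # P(fail from cause 1 by tau) = integral of h1(t) * overall survival S(t)
    def cause1_density(t):
        h1 = weibull_hazard(t, shape1, scale1)
        surv = np.exp(-(t / scale1) ** shape1 - (t / scale2) ** shape2)
        return h1 * surv
    return quad(cause1_density, 0, tau)[0]

def total_sample_size(hr, p_event, alpha=0.05, power=0.9, alloc=0.5):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    d = z ** 2 / (alloc * (1 - alloc) * np.log(hr) ** 2)  # required events
    return int(np.ceil(d / p_event))

p1 = prob_event_of_interest(3.0, 1.2, 4.0, 1.0, 6.0)
print(p1, total_sample_size(hr=0.7, p_event=p1))
```

The competing risk enters only through `p_event`: the stronger the competing cause, the fewer subjects reach the event of interest, and the larger the trial must be for the same number of informative events.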

5.
This paper discusses the analysis of interval-censored failure time data, which has recently attracted a great amount of attention (Li and Pu, Lifetime Data Anal 9:57–70, 2003; Sun, The statistical analysis of interval-censored data, 2006; Tian and Cai, Biometrika 93(2):329–342, 2006; Zhang et al., Can J Stat 33:61–70, 2005). Interval-censored data mean that the survival time of interest is observed only to belong to an interval, and such data occur in many fields including clinical trials, demographical studies, medical follow-up studies, public health studies and tumorigenicity experiments. A major difficulty with the analysis of interval-censored data is that one has to deal with a censoring mechanism that involves two related variables. For the inference, we present a transformation approach that transforms general interval-censored data into current status data, for which one only needs to deal with one censoring variable and the inference is thus much easier. We apply this general idea to regression analysis of interval-censored data using the additive hazards model and numerical studies indicate that the method performs well for practical situations. An illustrative example is provided.

6.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach on how to incorporate prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose to update the meta‐analytic‐predictive prior based on the results of the internal pilot study and to re‐estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics such as power and sample size distribution of the proposed procedure with the traditional sample size re‐estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics compared to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering to include prior information in sample size re‐estimation, the potential gains should be balanced against the risks.

7.
In many two‐period, two‐treatment (2 × 2) crossover trials, for each subject, a continuous response of interest is measured before and after administration of the assigned treatment within each period. The resulting data are typically used to test a null hypothesis involving the true difference in treatment response means. We show that the power achieved by different statistical approaches is greatly influenced by (i) the ‘structure’ of the variance–covariance matrix of the vector of within‐subject responses and (ii) how the baseline (i.e., pre‐treatment) responses are accounted for in the analysis. For (ii), we compare different approaches including ignoring one or both period baselines, using a common change from baseline analysis (which we advise against), using functions of one or both baselines as period‐specific or period‐invariant covariates, and doing joint modeling of the post‐baseline and baseline responses with corresponding mean constraints for the latter. Based on theoretical arguments and simulation‐based type I error rate and power properties, we recommend an analysis of covariance approach that uses the within‐subject difference in treatment responses as the dependent variable and the corresponding difference in baseline responses as a covariate. Data from three clinical trials are used to illustrate the main points. Copyright © 2014 John Wiley & Sons, Ltd.
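The recommended analysis can be sketched on simulated data: regress the within-subject difference in treatment responses on the corresponding difference in period baselines, with a ±1 sequence term to absorb the period effect. The data-generating model below (period-specific subject states, unit noise, true effect `tau = 1`) is an illustrative assumption, not the paper's simulation setup.

```python
# Minimal simulation of the recommended ANCOVA for a 2x2 crossover trial:
# dependent variable = within-subject test-minus-reference response
# difference; covariate = matching difference in period baselines;
# a +/-1 sequence term absorbs the period effect.
import numpy as np

rng = np.random.default_rng(7)
n, tau, period = 200, 1.0, 0.5   # subjects per sequence; true effects

def simulate_sequence(sign):
    # period-specific subject states make the baseline difference informative
    v1 = rng.normal(0, 1, n)
    v2 = rng.normal(0, 1, n)
    base1 = v1 + rng.normal(0, 1, n)
    base2 = v2 + rng.normal(0, 1, n)
    # sign=+1: test drug in period 1; sign=-1: test drug in period 2
    post1 = v1 + (tau if sign > 0 else 0.0) + rng.normal(0, 1, n)
    post2 = v2 + period + (0.0 if sign > 0 else tau) + rng.normal(0, 1, n)
    d = (post1 - post2) * sign    # test-minus-reference treatment response
    bd = (base1 - base2) * sign   # matching difference in period baselines
    return d, bd, np.full(n, sign)

d1, bd1, s1 = simulate_sequence(+1.0)
d2, bd2, s2 = simulate_sequence(-1.0)
d = np.concatenate([d1, d2])
bd = np.concatenate([bd1, bd2])
seq = np.concatenate([s1, s2])

# ANCOVA via least squares: the intercept estimates the treatment effect
X = np.column_stack([np.ones_like(d), seq, bd])
beta, *_ = np.linalg.lstsq(X, d, rcond=None)
print("estimated treatment effect:", beta[0])
```

The baseline-difference covariate soaks up period-specific subject variation that also appears in the response difference, which is the mechanism behind the power advantage the abstract reports.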

8.
9.
Non-parametric group sequential designs in randomized clinical trials
This paper examines some non‐parametric group sequential designs applicable for randomized clinical trials, for comparing two continuous treatment effects taking the observations in matched pairs, or applicable in event‐based analysis. Two inverse binomial sampling schemes are considered, of which the second one is an adaptive data‐dependent design. These designs are compared with some fixed sample size competitors. Power and expected sample sizes are calculated for the proposed procedures.

10.
A longitudinal mixture model for classifying patients into responders and non‐responders is established using both likelihood‐based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood‐based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood‐based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group compared with the control group, suggesting the treatment paroxetine is effective. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Simon's two‐stage design is the most commonly applied multi‐stage design in phase IIA clinical trials. It combines the sample sizes at the two stages in order to minimize either the expected or the maximum sample size. When the uncertainty about pre‐trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered since it allows one to deal with the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct a distribution from the available summaries to use as a clinical prior in a Bayesian design. In this work, we explore the Bayesian counterparts of Simon's two‐stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, which is used to compute the posterior probabilities, and the design prior, which is employed to obtain the prior predictive distribution. While the usual approach is to build beta priors for carrying out a conjugate analysis, we derived both the analysis and the design distributions through linear combinations of B‐splines. The motivating example is the planning of the phase IIA two‐stage trial on anti‐HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited experts' opinions and historical data showed a high level of uncertainty. In a sample size determination problem, the impact of different priors is evaluated.
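For readers less familiar with the frequentist design that this Bayesian work builds on, the sketch below computes the exact operating characteristics of a Simon two-stage design, illustrated with the widely tabulated optimal design for p0 = 0.2 vs p1 = 0.4 (stage 1: 3/13, total: 12/43). This reproduces standard design arithmetic, not anything specific to the paper's B-spline priors.

```python
# Exact operating characteristics of a Simon two-stage design (r1/n1, r/n):
# stop after stage 1 if responses <= r1; declare the treatment promising at
# the end if total responses exceed r.
from scipy.stats import binom

def simon_oc(p, r1, n1, r, n):
    pet = binom.cdf(r1, n1, p)              # probability of early termination
    # P(declare promising) = P(X1 > r1 and X1 + X2 > r)
    reject = sum(
        binom.pmf(x1, n1, p) * (1.0 if x1 > r else binom.sf(r - x1, n - n1, p))
        for x1 in range(r1 + 1, n1 + 1)
    )
    en = n1 + (1 - pet) * (n - n1)          # expected sample size
    return pet, reject, en

pet0, alpha, en0 = simon_oc(0.20, 3, 13, 12, 43)   # under the null rate p0
_, power, _ = simon_oc(0.40, 3, 13, 12, 43)        # under the target rate p1
print(f"PET(p0)={pet0:.3f}  alpha={alpha:.4f}  EN(p0)={en0:.1f}  power={power:.3f}")
```

A Bayesian counterpart replaces these fixed thresholds with decisions driven by posterior and predictive probabilities under the analysis and design priors described in the abstract.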

12.
In health technology assessment (HTA), besides network meta‐analysis (NMA), indirect comparisons (IC) have become an important tool used to provide evidence between two treatments when no head‐to‐head data are available. Researchers may use the adjusted indirect comparison based on the Bucher method (AIC) or the matching‐adjusted indirect comparison (MAIC). While the Bucher method may provide biased results when included trials differ in baseline characteristics that influence the treatment outcome (treatment effect modifiers), this issue may be addressed by applying the MAIC method if individual patient data (IPD) for at least one part of the AIC is available. Here, IPD is reweighted to match baseline characteristics and/or treatment effect modifiers of published data. However, the MAIC method does not provide a solution for situations when several common comparators are available. In these situations, assuming that the indirect comparison via the different common comparators is homogeneous, we propose merging these results by using meta‐analysis methodology to provide a single, potentially more precise, treatment effect estimate. This paper introduces a method to combine several MAIC networks using classic meta‐analysis techniques, discusses the advantages and limitations of this approach, and demonstrates a practical application to combine several (M)AIC networks using data from Phase III psoriasis randomized control trials (RCT).
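The merging step can be sketched with a classic fixed-effect inverse-variance meta-analysis: each common comparator yields one indirect estimate of the same A-vs-B effect, and the estimates are pooled. The log hazard-ratio inputs below are made-up illustrative numbers, not data from the psoriasis application.

```python
# Fixed-effect inverse-variance pooling of several indirect-comparison
# estimates of the same treatment effect (e.g. one MAIC per common comparator).
import math

def fixed_effect_pool(estimates, ses):
    w = [1 / s ** 2 for s in ses]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    return pooled, math.sqrt(1 / sum(w))

# three (M)AIC networks, each via a different common comparator
log_hr = [-0.30, -0.22, -0.41]
se = [0.15, 0.20, 0.25]
est, pooled_se = fixed_effect_pool(log_hr, se)
ci = (est - 1.96 * pooled_se, est + 1.96 * pooled_se)
print(est, pooled_se, ci)
```

The pooled standard error is smaller than any single network's, which is exactly the "potentially more precise" gain the abstract claims; the homogeneity assumption across comparators is what licenses the pooling.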

13.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between‐treatment difference in population means of a clinical endpoint that is free from the confounding effects of “rescue” medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post‐rescue data. We caution that the commonly used mixed‐effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for “dropouts” (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient‐level imputation is not required. A supplemental “dropout = failure” analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between‐treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.

14.
In clinical trials with repeated measurements, the responses from each subject are measured multiple times during the study period. Two approaches have been widely used to assess the treatment effect, one that compares the rate of change between two groups and the other that tests the time-averaged difference (TAD). While sample size calculations based on comparing the rate of change between two groups have been reported by many investigators, the literature has paid relatively little attention to sample size estimation for the TAD in the presence of heterogeneous correlation structures and missing data in repeated measurement studies. In this study, we investigate sample size calculation for the comparison of time-averaged responses between treatment groups in clinical trials with longitudinally observed binary outcomes. The generalized estimating equation (GEE) approach is used to derive a closed-form sample size formula, which is flexible enough to account for arbitrary missing patterns and correlation structures. In particular, we demonstrate that the proposed sample size can accommodate a mixture of missing patterns, which is frequently encountered by practitioners in clinical trials. To our knowledge, this is the first study that considers the mixture of missing patterns in sample size calculation. Our simulation shows that the nominal power and type I error are well preserved over a wide range of design parameters. Sample size calculation is illustrated through an example.
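A hedged sketch of the special case with compound-symmetric correlation and no missing data conveys the shape of such a formula; the paper's closed form additionally accommodates arbitrary missing patterns and correlation structures. The response rates, number of visits, and correlation below are illustrative assumptions.

```python
# GEE-type sample size for the time-averaged difference (TAD) in a binary
# outcome with m repeated measurements and compound-symmetric (CS) working
# correlation -- the standard textbook special case.
import math
from scipy.stats import norm

def n_per_group_tad(p1, p2, m, rho, alpha=0.05, power=0.9):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)        # sum of the two arm variances
    design_effect = (1 + (m - 1) * rho) / m     # CS inflation for the TAD
    return math.ceil(z ** 2 * var * design_effect / (p1 - p2) ** 2)

# detect 0.50 vs 0.35 averaged over 4 visits, within-subject rho = 0.3
print(n_per_group_tad(0.50, 0.35, m=4, rho=0.3))
```

Higher within-subject correlation inflates the design effect and hence the required sample size; missing data would enter by replacing `m` with the effective number of observed visits per subject.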

15.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on the agreement with the current trial controls, an equivalence approach and an approach based on tail area probabilities. An adaptive design is used where the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare operating characteristics of the proposed design to historical data methods in the literature: the modified power prior; commensurate prior; and robust mixture prior. The equivalence probability weight approach is intuitive and the operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
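The fixed-power power prior mentioned above has a simple conjugate form for binary data: historical responses enter the Beta prior scaled by the power a0, so a0 = 0.5 makes each historical control count as half a patient. The counts below are illustrative, not the paper's data.

```python
# Down-weighting historical control data with a fixed-power power prior for
# a binary endpoint: with a Beta(a, b) initial prior, y_h successes out of
# n_h historical controls, and power a0 in [0, 1], the prior for the current
# control response rate is Beta(a + a0*y_h, b + a0*(n_h - y_h)).
from scipy.stats import beta

a, b = 1.0, 1.0                 # vague initial prior
y_h, n_h = 30, 100              # historical controls: 30% response
a0 = 0.5                        # fixed power: historical data count half

prior = beta(a + a0 * y_h, b + a0 * (n_h - y_h))
y_c, n_c = 14, 40               # current control data
post = beta(a + a0 * y_h + y_c, b + a0 * (n_h - y_h) + n_c - y_c)

print("prior mean:", prior.mean(), "posterior mean:", post.mean())
```

The paper's contribution is in choosing the down-weighting adaptively (equivalence and tail-area approaches) and adapting the allocation ratio; the conjugate update itself is the standard machinery those approaches plug into.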

16.
Summary.  Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or to improve the efficiency with which they are collected. For reasons of cost it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably, without conducting an experiment that is embedded in the survey which involves enumerating some respondents by using the new method and some under the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also mentioned. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the approach proposed include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; it allows a wide set of alternative experimental designs. This paper applies the methodology proposed to the Australian Labour Force Survey to measure the effect of replacing pen-and-paper interviewing with computer-assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The approach proposed is significantly more efficient than using only 1 month of sample data in estimation.

17.
In two-stage randomization designs, patients are randomized to one or more available therapies upon entry into the study. Depending on the response to the initial treatment (such as complete remission or shrinkage of tumor), patients are then randomized to receive maintenance treatments to maintain the response or salvage treatment to induce response. One goal of such trials is to compare the combinations of initial and maintenance or salvage therapies in the form of treatment strategies. In cases where the endpoint is defined as overall survival, Lunceford et al. [2002. Estimation of survival distributions of treatment policies in two-stage randomization designs in clinical trials. Biometrics 58, 48–57] used mean survival time and pointwise survival probability to compare treatment strategies. But, mean survival time or survival probability at a specific time may not be a good summary representative of the overall distribution when the data are skewed or contain influential tail observations. In this article, we propose consistent and asymptotically normal estimators for percentiles of survival curves under various treatment strategies and demonstrate the use of percentiles for comparing treatment strategies. Small sample properties of these estimators are investigated using simulation. We demonstrate our methods by applying them to a leukemia clinical trial data set that motivated this research.

18.
In many relevant situations, such as in medical research, sample sizes may not be known in advance. The aim of this paper is to extend one-way and multi-way analysis of variance to such situations and show how to compute correct critical values. The interest of this approach lies in avoiding false rejections obtained when using the classical fixed-size F-tests. Sample sizes are assumed random, and we then apply this approach to a database on cancer.

19.
In terms of the risk of making a Type I error in evaluating a null hypothesis of equality, requiring two independent confirmatory trials with two‐sided p‐values less than 0.05 is equivalent to requiring one confirmatory trial with two‐sided p‐value less than 0.00125. Furthermore, the use of a single confirmatory trial is gaining acceptability, with discussion in both ICH E9 and a CPMP Points to Consider document. Given the growing acceptance of this approach, this note provides a formula for the sample size savings that are obtained with the single clinical trial approach depending on the levels of Type I and Type II errors chosen. For two replicate trials each powered at 90%, which corresponds to a single larger trial powered at 81%, an approximate 19% reduction in total sample size is achieved with the single trial approach. Alternatively, a single trial with the same sample size as the total sample size from two smaller trials will have much greater power. For example, in the case where two trials are each powered at 90% for two‐sided α=0.05 yielding an overall power of 81%, a single trial using two‐sided α=0.00125 would have 91% power. Copyright © 2004 John Wiley & Sons, Ltd.
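The note's two headline numbers follow directly from normal-approximation sample size arithmetic, since sample size scales with the squared sum of the standard normal quantiles for the Type I and Type II error levels:

```python
# Reproducing the note's arithmetic: two trials each at two-sided alpha=0.05
# with 90% power, versus one trial at two-sided alpha=0.00125.
from scipy.stats import norm

z = norm.ppf
two_trials = 2 * (z(0.975) + z(0.90)) ** 2       # total size, arbitrary units
single = (z(1 - 0.00125 / 2) + z(0.81)) ** 2     # one trial powered at 81%
reduction = 1 - single / two_trials
print(f"sample size reduction: {reduction:.1%}")  # roughly the note's ~19%

# power of one trial with the same total sample size as the two trials
power_single = norm.cdf((2 ** 0.5) * (z(0.975) + z(0.90)) - z(1 - 0.00125 / 2))
print(f"power at alpha = 0.00125: {power_single:.0%}")
```

Both quantities match the abstract: roughly a fifth of the total sample size is saved at equal (81%) power, and keeping the full sample size in one trial lifts power from 81% to about 91%.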

20.
The tumor burden (TB) process is postulated to be the primary mechanism through which most anticancer treatments provide benefit. In phase II oncology trials, the biologic effects of a therapeutic agent are often analyzed using conventional endpoints for best response, such as objective response rate and progression‐free survival, both of which cause a loss of information. On the other hand, graphical methods including the spider plot and waterfall plot lack any statistical inference when there is more than one treatment arm. Therefore, longitudinal analysis of TB data is well recognized as a better approach for treatment evaluation. However, the longitudinal TB process suffers from informative missingness because of progression or death. We propose to analyze the treatment effect on tumor growth kinetics using a joint modeling framework accounting for the informative missing mechanism. Our approach is illustrated by simulation studies in multiple settings and an application to a non-small-cell lung cancer data set. The proposed analyses can be performed in early‐phase clinical trials to better characterize treatment effect and thereby inform decision‐making. Copyright © 2014 John Wiley & Sons, Ltd.

