20 similar documents found; search took 0 ms.
1.
2.
3.
Kelley M. Kidwell, Nicholas J. Seewald, Qui Tran, Connie Kasari, Daniel Almirall. Journal of Applied Statistics, 2018, 45(9): 1628-1651
In behavioral, educational, and medical practice, interventions are often personalized over time using strategies based on individual behaviors and characteristics and on changes in symptoms, severity, or adherence that result from one's treatment. Such strategies, which more closely mimic real practice, are known as dynamic treatment regimens (DTRs). A sequential multiple assignment randomized trial (SMART) is a multi-stage trial design that can be used to construct effective DTRs. This article reviews a simple-to-use ‘weighted and replicated’ estimation technique for comparing DTRs embedded in a SMART design using logistic regression for a binary, end-of-study outcome variable. Based on a Wald test that compares two embedded DTRs of interest from the ‘weighted and replicated’ regression model, a sample size calculation is presented with a corresponding user-friendly applet to aid in the process of designing a SMART. The analytic models and sample size calculations are presented for three of the more commonly used two-stage SMART designs. Simulations for the sample size calculation show that the empirical power reaches expected levels. A data analysis example with corresponding code is presented in the appendix using data from a SMART developing an effective DTR in autism.
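As an illustration of the ‘weighted and replicated’ idea (our sketch, not the authors' code): in a prototypical two-stage SMART where responders to first-stage treatment are not re-randomized and non-responders are re-randomized, each randomization with probability 1/2, responders receive weight 2 and are replicated once per consistent second-stage tactic, while re-randomized non-responders receive weight 4 and appear once. All variable names are hypothetical.

```python
# Hypothetical sketch of the "weighted and replicated" data set
# construction for a prototypical two-stage SMART: responders are
# randomized once (weight 1/(1/2) = 2) and are consistent with both
# second-stage tactics, so they are replicated; non-responders are
# randomized twice (weight 1/(1/2 * 1/2) = 4) and appear once.

def weight_and_replicate(rows):
    """rows: list of dicts with keys a1, responder, a2 (None for
    responders), y.  Returns replicated rows with weights w."""
    out = []
    for r in rows:
        if r["responder"]:
            for a2 in (0, 1):  # one copy per second-stage tactic
                out.append({"a1": r["a1"], "a2": a2, "y": r["y"], "w": 2.0})
        else:
            out.append({"a1": r["a1"], "a2": r["a2"], "y": r["y"], "w": 4.0})
    return out

data = [
    {"a1": 1, "responder": True,  "a2": None, "y": 1},
    {"a1": 1, "responder": False, "a2": 0,    "y": 0},
]
rep = weight_and_replicate(data)
```

The weighted, replicated rows would then feed a weighted logistic regression of y on the DTR indicators; note that each subject's total weight is 4 regardless of response status.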
4.
In this paper, we propose a design that uses a short-term endpoint for accelerated approval at the interim analysis and a long-term endpoint for full approval at the final analysis, with sample size adaptation based on the long-term endpoint. Two sample size adaptation rules are compared: an adaptation rule that maintains the conditional power at a prespecified level, and a step-function-type adaptation rule that better addresses the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive between the endpoints; and alpha exhaustive with an improved critical value based on correlation. The family-wise error rate is proved to be strongly controlled across the two endpoints, the sample size adaptation, and the two analysis time points with the proposed designs. We show that using alpha-exhaustive designs greatly improves the power when both endpoints are effective, and that the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings. Copyright © 2015 John Wiley & Sons, Ltd.
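The conditional-power adaptation rule can be sketched as follows (our notation and numbers, not the paper's): the stage-2 sample size is found by bisection so that the conditional power of the pooled z-statistic, evaluated at the interim estimate of the drift, reaches the prespecified level.

```python
# Sketch: second-stage sample size chosen so that conditional power
# reaches a target.  Stage-1 z-value z1 is observed on n1 subjects;
# the per-subject drift is estimated as theta = z1/sqrt(n1); the final
# statistic pools the stage z-values weighted by sqrt(sample size).
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def conditional_power(z1, n1, n2, alpha=0.025):
    theta = z1 / sqrt(n1)                 # estimated drift per subject
    crit = N.inv_cdf(1 - alpha)
    # Final Z = (z1*sqrt(n1) + Z2*sqrt(n2)) / sqrt(n1 + n2), with
    # Z2 ~ Normal(theta*sqrt(n2), 1) under the estimated effect.
    bound = (crit * sqrt(n1 + n2) - z1 * sqrt(n1)) / sqrt(n2)
    return 1 - N.cdf(bound - theta * sqrt(n2))

def stage2_n(z1, n1, target=0.9, lo=1, hi=100_000):
    # bisection on the (monotone, in this region) conditional power
    if conditional_power(z1, n1, hi) < target:
        return hi                          # cap the re-estimated size
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if conditional_power(z1, n1, mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi

n2 = stage2_n(z1=1.2, n1=100, target=0.9)
```

A step-function rule, as compared in the abstract, would replace the continuous solution by a few prespecified plateaus of n2.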
5.
John Lawrence. Pharmaceutical Statistics, 2002, 1(2): 97-105
Since the treatment effect of an experimental drug is generally not known at the onset of a clinical trial, it may be wise to allow for an adjustment to the sample size after an interim analysis of the unblinded data. Using a particular adaptive test statistic, a procedure is demonstrated for finding the optimal design. Both the timing of the interim analysis and the way the sample size is adjusted can influence the power of the resulting procedure. The adaptive test statistic can yield a smaller average sample size, even if the initial estimate of the treatment effect is wrong, than a standard test statistic with no interim look and a correct initial estimate of the effect. Copyright © 2002 John Wiley & Sons, Ltd.
6.
Owing to increased costs and competitive pressure, drug development is becoming more and more challenging. There is therefore a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework was proposed for optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize the procedure to clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the “all-or-none” and “at-least-one” win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer's disease and oncology.
7.
Multivariate techniques based on O'Brien's OLS and GLS statistics are discussed in the context of their application in clinical trials. We introduce the concept of an operational effect size and illustrate its use in evaluating power. An extension describing how to handle covariates and missing data is developed in the context of mixed models. This extension, which allows adjustment for covariates, is easily programmed in any statistical package, including SAS. Monte Carlo simulation is used for a number of different sample sizes to compare the actual size and power of the tests based on O'Brien's OLS and GLS statistics.
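The OLS and GLS global statistics follow standard formulas and can be sketched directly (our illustration, not code from the paper); z holds the per-endpoint test statistics and R their correlation matrix:

```python
# O'Brien-type global test statistics for K endpoints (standard
# formulas; the example numbers are illustrative).
import numpy as np

def obrien_ols(z, R):
    J = np.ones(len(z))
    return float(J @ z / np.sqrt(J @ R @ J))

def obrien_gls(z, R):
    J = np.ones(len(z))
    Rinv = np.linalg.inv(R)
    return float(J @ Rinv @ z / np.sqrt(J @ Rinv @ J))

z = np.array([2.0, 1.5, 1.8])
R = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
ols, gls = obrien_ols(z, R), obrien_gls(z, R)
```

For an equicorrelated R, as here, the two statistics coincide; they generally differ once the endpoints have unequal correlations.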
8.
9.
Matthew Somerville, Timothy Wilson, Gary Koch, Peter Westfall. Pharmaceutical Statistics, 2005, 4(1): 7-13
We consider the problem of accounting for multiplicity for two correlated endpoints in the comparison of two treatments using weighted hypothesis tests. Various weighted testing procedures are reviewed, and a more powerful method (a variant of the weighted Simes test) is evaluated for the general bivariate normal case and for a particular clinical trial example. Results from these evaluations are summarized and indicate that the weighted methods perform similarly to unweighted methods. Copyright © 2005 John Wiley & Sons, Ltd.
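One common form of the weighted Simes test can be sketched as follows (a hedged illustration after Benjamini and Hochberg's weighted p-value ideas; the variant studied in the paper may differ in detail): with weights summing to 1, the global null is rejected at level alpha if some p-value is at most alpha times the cumulative weight of the hypotheses with smaller or equal p-values. Equal weights recover the ordinary Simes test.

```python
# One common form of the weighted Simes global test (illustrative
# sketch; not necessarily the exact variant evaluated in the paper).

def weighted_simes(pvals, weights, alpha=0.05):
    """weights should sum to 1; returns True if the global null is
    rejected at level alpha."""
    pairs = sorted(zip(pvals, weights))
    cum = 0.0
    for p, w in pairs:
        cum += w             # cumulative weight of p-values <= p
        if p <= alpha * cum:
            return True
    return False

# with equal weights this is Simes: p_(2) = 0.04 <= 0.05 -> reject
rejected = weighted_simes([0.03, 0.04], [0.5, 0.5])
```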
10.
In clinical trials involving multiple comparisons of interest, the importance of controlling the trial Type I error is well understood and well documented. Moreover, when these comparisons are themselves correlated, methodologies exist for accounting for the correlation in the trial design when calculating the trial significance levels. Less well documented, however, is the fact that in some circumstances multiple comparisons affect the Type II error rather than the Type I error, and failure to account for this can result in a reduction in overall trial power. In this paper, we describe sample size calculations for clinical trials involving multiple correlated comparisons where all comparisons must be statistically significant for the trial to provide evidence of effect, and we show how such calculations must account for multiplicity in the Type II error. We begin with the simple case of two comparisons assuming a bivariate normal distribution, show how to factor in the correlation between comparisons, and then generalise our findings to two or more comparisons using inflation factors that increase the sample size relative to the case of a single outcome. These methods are easy to apply, and we demonstrate that accounting for multiplicity in the Type II error leads, at most, to modest increases in the sample size.
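The "both comparisons must be significant" power can be checked by simulation (our sketch; the paper gives an analytic bivariate normal result). The effect size below is chosen so each comparison alone has 90% power at one-sided alpha 0.025; all numbers are illustrative.

```python
# Monte Carlo check of the power to get BOTH of two correlated
# z-statistics significant (illustrative numbers, not the paper's).
import numpy as np
from statistics import NormalDist

def power_both_significant(delta, rho, alpha=0.025, nsim=200_000, seed=1):
    rng = np.random.default_rng(seed)
    crit = NormalDist().inv_cdf(1 - alpha)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([delta, delta], cov, size=nsim)
    return float(np.mean((z[:, 0] > crit) & (z[:, 1] > crit)))

# delta giving 90% marginal power for each comparison on its own
delta = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.90)
p = power_both_significant(delta, rho=0.5)
```

With rho = 0.5 the joint power falls strictly between the independence bound (0.9² = 0.81) and the single-comparison power 0.90, which is exactly the Type II error multiplicity the abstract describes.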
11.
Journal of Statistical Computation and Simulation, 2012, 82(3-4): 173-185
The power assessment of tests of the equality of k normal means, such as the k treatment means in a one-way fixed effects analysis of variance model, is addressed. Power assessment is considered in terms of a constraint on the range of the treatment means. The power properties of the standard F-test and Studentised range test are compared with those of an optimal (minimax) test procedure, which is known to maximise power levels under this constraint. It is shown that the standard test procedures compare well with the optimal test procedure, and in particular, the Studentised range test is shown to be practically as good as optimal in this setting.
12.
Guido Giani. Communications in Statistics - Theory and Methods, 2013, 42(10): 3163-3171
The problem of selecting s out of k given components such that the selection contains at least c of the t best ones is considered. For underlying distribution families with a location or scale parameter, it is shown that the indifference-zone approach can be strengthened to confidence statements for the parameters of the selected components. These confidence statements are valid over the entire parameter space without decreasing the infimum of the probability of a correct selection.
13.
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejecting the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that, given the cost and complexity of many survival trials, it is better to rely on the minimum p-value than on a single statistic, particularly when that single statistic is the logrank test. Copyright © 2013 John Wiley & Sons, Ltd.
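A minimal sketch of the minimum p-value permutation test (our implementation; the candidate statistics and settings are illustrative): each candidate statistic is converted to a permutation p-value, and the smallest observed p-value is compared against the permutation critical value of the min-p statistic itself.

```python
# Minimum p-value permutation test, sketched: the critical value comes
# from the permutation distribution of the min-p statistic, so the
# type I error stays at alpha despite using several statistics.
import numpy as np

def minp_test(x, y, stats, alpha=0.05, nperm=2000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)
    obs = np.array([s(x, y) for s in stats])
    perm = np.empty((nperm, len(stats)))
    for b in range(nperm):
        idx = rng.permutation(len(pooled))
        perm[b] = [s(pooled[idx[:n]], pooled[idx[n:]]) for s in stats]
    # one-sided permutation p-values of the observed statistics
    p_obs = ((perm >= obs).sum(axis=0) + 1) / (nperm + 1)
    # p-value each permuted statistic would have received in its column
    ranks = perm.argsort(axis=0).argsort(axis=0)   # 0 = smallest value
    col_p = (nperm - ranks) / nperm
    # alpha-quantile of the null min-p distribution = critical value
    crit = np.quantile(col_p.min(axis=1), alpha)
    return bool(p_obs.min() <= crit), float(p_obs.min())

x = np.arange(10.0)
y = np.arange(10.0) + 10.0                 # clear group separation
stats = [lambda a, b: b.mean() - a.mean(),
         lambda a, b: np.median(b) - np.median(a)]
reject, pmin = minp_test(x, y, stats)
```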
14.
Two recursive schemes are presented for the calculation of the probability P(g(x) ≤ S_n(x) ≤ h(x) for all x ∈ ℝ), where S_n is the empirical distribution function of a sample from a continuous distribution and g, h are continuous and isotone functions. The results are specialized to the calculation of the distribution and the corresponding percentage points of the test statistic of the two-sided Kolmogorov-Smirnov one-sample test. The schemes also allow the calculation of the power of the test. Finally, an extensive tabulation of percentage points for the Kolmogorov-Smirnov test is given.
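The band-crossing probability can be illustrated by Monte Carlo (the paper's contribution is an exact recursion; this sketch only shows the quantity being computed), specialized to constant two-sided Kolmogorov-Smirnov bands F(x) ± d, where it equals P(D_n ≤ d):

```python
# Monte Carlo estimate of P(g <= S_n <= h) for the constant KS bands,
# i.e. P(D_n <= d), using a uniform sample (distribution-free case).
import numpy as np

def p_within_ks_band(n, d, nsim=100_000, seed=2):
    rng = np.random.default_rng(seed)
    u = np.sort(rng.random((nsim, n)), axis=1)
    i = np.arange(1, n + 1)
    # D_n is attained at the jump points: S_n jumps from (i-1)/n to i/n
    # at the i-th order statistic
    d_plus = np.max(i / n - u, axis=1)
    d_minus = np.max(u - (i - 1) / n, axis=1)
    dn = np.maximum(d_plus, d_minus)
    return float(np.mean(dn <= d))

# a commonly tabulated 5% point for n = 10 is about d = 0.409
p = p_within_ks_band(n=10, d=0.409)
```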
15.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between treatment groups and violating the proportional hazards assumption. Using the log-rank test in immunotherapy trial design can therefore result in a severe loss of efficiency. Few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect; recently, Ye and Yu proposed the use of a maximin efficiency robust test (MERT) for such designs. The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
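Our reading of the V0 construction can be sketched as follows (a simplified illustration: "data beyond the lag" is taken to mean subjects with observed times beyond the lag, and the sum of the two z-statistics is shown without the joint standardization a real design would need):

```python
# Sketch of a V0-style statistic: log-rank z on all data plus log-rank
# z restricted to times beyond the lag.  Toy data, simplified reading
# of the abstract; not the paper's exact formulation.
import numpy as np

def logrank_z(time, event, group):
    """Standardized log-rank statistic comparing group 1 vs group 0."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def v0_statistic(time, event, group, lag):
    late = time > lag
    return (logrank_z(time, event, group)
            + logrank_z(time[late], event[late], group[late]))

# toy data: treated group (1) has uniformly later event times
time = np.arange(1.0, 21.0)
event = np.ones(20, dtype=int)
group = np.array([0] * 10 + [1] * 10)
z_full = logrank_z(time, event, group)
v0 = v0_statistic(time, event, group, lag=5.0)
```

Negative values favor the treated group here; in a calibrated test the sum would be referred to its own null variance, which the paper's sample size formula accounts for.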
16.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance of the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance of the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
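For planning context, a textbook-style per-arm sample size for comparing two negative binomial event rates on the log rate-ratio scale can be sketched as follows (our approximation, not the paper's blinded monitoring procedure; k is the dispersion in Var = μ + kμ², t the follow-up time, and all planning values are illustrative):

```python
# Rough per-arm sample size for a negative binomial rate comparison,
# based on Var(log rate-ratio) ~ [(1/mu1 + k) + (1/mu2 + k)] / n.
# A planning sketch only; the paper's information monitoring is more
# involved.
from math import ceil, log
from statistics import NormalDist

def nb_sample_size(rate1, rate2, k, t, alpha=0.05, power=0.9):
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    mu1, mu2 = rate1 * t, rate2 * t      # expected events per patient
    var = (1 / mu1 + k) + (1 / mu2 + k)  # n * Var(log rate ratio)
    return ceil((za + zb) ** 2 * var / log(rate1 / rate2) ** 2)

n = nb_sample_size(rate1=1.0, rate2=0.7, k=0.8, t=2.0)
```

The sensitivity of n to the assumed dispersion k is exactly why blinded reestimation or continuous information monitoring is attractive.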
17.
A study design with two or more doses of a test drug and placebo is frequently used in clinical drug development. Multiplicity issues arise when there are multiple comparisons between doses of the test drug and placebo, and also when doses are compared with one another. An appropriate analysis strategy needs to be specified in advance to avoid spurious results through insufficient control of the Type I error, as well as to avoid the loss of power due to excessively conservative adjustments for multiplicity. To evaluate alternative strategies with possibly complex management of multiplicity, we compare the performance of several testing procedures on simulated data representing various patterns of treatment differences. The purpose is to identify which methods perform better or more robustly than the others, and under what conditions. Copyright © 2005 John Wiley & Sons, Ltd.
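Holm's step-down procedure, one of the standard adjustments usually included in such comparisons, can be sketched as follows (our illustration; the paper's simulation study covers several procedures, and the p-values below are hypothetical dose-versus-placebo results):

```python
# Holm's step-down procedure: test p-values in ascending order against
# alpha/m, alpha/(m-1), ..., stopping at the first failure.  Controls
# the family-wise error rate and is uniformly more powerful than
# Bonferroni.

def holm_rejections(pvals, alpha=0.05):
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    rejected = set()
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):
            rejected.add(i)
        else:
            break                 # stop: larger p-values also fail
    return rejected

# three hypothetical dose-vs-placebo p-values
rej = holm_rejections([0.001, 0.02, 0.04])
```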
18.
A sample size justification is a vital part of any investigation. However, estimating the number of participants required to give meaningful results is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for superiority trials are summarised. Practical advice and examples are provided illustrating how to carry out the calculations by hand and using the app SampSize. Copyright © 2015 John Wiley & Sons, Ltd.
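The core hand calculation for a continuous outcome in a two-arm superiority trial can be sketched as follows (the standard formula; the worked numbers are ours, not the paper's):

```python
# Standard per-arm sample size for a two-arm superiority trial with a
# continuous outcome:
#   n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sd / delta)^2
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.9):
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2)

# detecting a difference of half a standard deviation with 90% power
n = n_per_arm(delta=0.5, sd=1.0)
```

The familiar quick check is n = 16 per arm for a one-standard-deviation difference at 80% power.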
19.
A sample size selection procedure for paired comparisons of means is presented that controls the half-width of the confidence intervals while allowing for unequal variances of the treatment means.
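The idea can be sketched with z-based intervals for a single difference of two means (illustrative formulas, not the paper's exact procedure): allocating n_i proportionally to σ_i minimizes the total sample size for a given half-width when variances are unequal.

```python
# Group sizes so the z-based CI for a difference of two means has
# half-width at most h, with the total-n-minimizing allocation
# n_i proportional to sigma_i.  Illustrative sketch only.
from math import ceil
from statistics import NormalDist

def sizes_for_halfwidth(sigma1, sigma2, h, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    # with n_i = N * sigma_i/(sigma1+sigma2), the CI half-width is
    # z * (sigma1+sigma2)/sqrt(N), so solve for the total N
    total = z ** 2 * (sigma1 + sigma2) ** 2 / h ** 2
    n1 = ceil(total * sigma1 / (sigma1 + sigma2))
    n2 = ceil(total * sigma2 / (sigma1 + sigma2))
    return n1, n2

n1, n2 = sizes_for_halfwidth(sigma1=2.0, sigma2=1.0, h=0.5)
```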
20.
Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family
Wei Jiang, Jonathan D. Mahnken, Jianghua He, Matthew S. Mayo. Pharmaceutical Statistics, 2016, 15(6): 459-470
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to phase II clinical trials with endpoints from exponential dispersion family distributions. The proposed optimal design minimizes the total sample size needed to estimate the population means of both arms and their difference with pre-specified precision. Its application to data from specific distribution families is discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
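The dichotomous-endpoint special case can be sketched as follows (our illustration; the planning rates p1, p2 and standard-error bounds e1, e2, e12 are hypothetical): minimize n1 + n2 subject to standard-error constraints on each arm's estimated rate and on their difference.

```python
# Sketch of the dichotomous-endpoint optimal design: minimize n1 + n2
# subject to SE(p1_hat) <= e1, SE(p2_hat) <= e2, and
# SE(p1_hat - p2_hat) <= e12.  A greedy increment on the arm with the
# larger marginal variance reduction is optimal here because the
# per-arm gains v_i/(n_i*(n_i+1)) are decreasing in n_i.
from math import ceil

def optimal_two_arm(p1, p2, e1, e2, e12):
    v1, v2 = p1 * (1 - p1), p2 * (1 - p2)
    # per-arm constraints give lower bounds directly
    n1 = ceil(v1 / e1 ** 2)
    n2 = ceil(v2 / e2 ** 2)
    # grow the more helpful arm until the difference constraint holds
    while v1 / n1 + v2 / n2 > e12 ** 2:
        if v1 / (n1 + 1) + v2 / n2 >= v1 / n1 + v2 / (n2 + 1):
            n2 += 1
        else:
            n1 += 1
    return n1, n2

n1, n2 = optimal_two_arm(p1=0.4, p2=0.2, e1=0.1, e2=0.1, e12=0.1)
```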