Similar Documents
20 similar documents found (search time: 62 ms)
1.
The clinical efficacy of a new treatment may often be better evaluated by two or more co-primary endpoints. Recently, in pharmaceutical drug development, there has been increasing discussion of requiring statistically significant favorable results on more than one endpoint in comparisons between treatments, which is referred to as the problem of multiple co-primary endpoints. Several methods have been proposed for calculating the sample size required to design a trial with multiple correlated co-primary endpoints. However, because these methods require considerable mathematical sophistication and knowledge of programming techniques, their adoption in practice may be limited. To improve their convenience, in this paper we provide a useful formula with accompanying numerical tables for sample size calculations to design clinical trials with two treatments, where the efficacy of a new treatment is demonstrated on continuous co-primary endpoints. In addition, we provide some examples to illustrate the sample size calculations made using the formula. Using the formula and the tables, which can be read according to the patterns of correlations and effect size ratios expected in the co-primary endpoints, makes it convenient to evaluate the required sample size promptly.
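As a concrete illustration of the kind of calculation this abstract describes, the sketch below finds the smallest per-group sample size at which two correlated continuous co-primary endpoints are both significant with a target joint power. This is a generic textbook-style computation, assuming standardized effect sizes with known unit variance, equal allocation, and one-sided tests at level alpha; it is not the authors' formula or tables, and the function names are ours.

```python
import math
from statistics import NormalDist

ND = NormalDist()

def bivar_upper_prob(c1, c2, rho, steps=4000, lim=8.0):
    """P(Z1 > c1, Z2 > c2) for a standard bivariate normal with correlation
    rho, by numerically integrating pdf(z1) * P(Z2 > c2 | Z1 = z1)."""
    if c1 >= lim:
        return 0.0
    s = math.sqrt(1.0 - rho * rho)
    h = (lim - c1) / steps
    total = 0.0
    for i in range(steps):
        z = c1 + (i + 0.5) * h  # midpoint rule
        total += ND.pdf(z) * (1.0 - ND.cdf((c2 - rho * z) / s)) * h
    return total

def coprimary_n(delta1, delta2, rho, alpha=0.025, power=0.8, n_max=100000):
    """Smallest per-group n at which BOTH one-sided level-alpha z-tests are
    significant with the target joint power (known variance 1, 1:1 allocation).
    Joint power is monotone in n, so binary search applies."""
    z_a = ND.inv_cdf(1.0 - alpha)

    def joint_power(n):
        m1 = delta1 * math.sqrt(n / 2.0)
        m2 = delta2 * math.sqrt(n / 2.0)
        return bivar_upper_prob(z_a - m1, z_a - m2, rho)

    lo, hi = 1, n_max
    while lo < hi:
        mid = (lo + hi) // 2
        if joint_power(mid) >= power:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Higher correlation between the endpoints makes joint significance easier, so the required n decreases as rho grows, which is exactly the pattern by which the proposed tables are indexed.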

2.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and evaluate the accuracy of sample size calculation formula derived from the asymptotic test procedure with respect to power in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider use of the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double‐blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.

3.
In clinical trials with binary endpoints, the required sample size does not depend only on the specified type I error rate, the desired power and the treatment effect but also on the overall event rate which, however, is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty. Here, nuisance parameters required for sample size calculation are re-estimated during the ongoing trial and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials where the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure crucially depends on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.
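For orientation, the sketch below implements one widely used asymptotic sample size formula for a relative-risk effect measure, based on the Wald test for the log relative risk with an unpooled (unequal) variance. The abstract's specific "unequal variance asymptotic normal formula" may differ in detail; this is a standard approximation, and the function name is ours.

```python
import math
from statistics import NormalDist

def n_per_group_log_rr(p_control, rr, alpha=0.05, power=0.8, ratio=1.0):
    """Sample sizes (control, treatment) for a two-sided Wald test of the
    log relative risk, H0: RR = 1.  `ratio` = n_treatment / n_control.
    Uses the standard log-RR variance (1-p1)/(n1*p1) + (1-p2)/(n2*p2)."""
    nd = NormalDist()
    p_trt = p_control * rr
    z_a = nd.inv_cdf(1.0 - alpha / 2.0)
    z_b = nd.inv_cdf(power)
    var_unit = (1.0 - p_control) / p_control + (1.0 - p_trt) / (ratio * p_trt)
    n_control = (z_a + z_b) ** 2 * var_unit / math.log(rr) ** 2
    return math.ceil(n_control), math.ceil(ratio * n_control)
```

In an internal pilot design, the overall event rate re-estimated at the interim would simply be plugged back into such a formula to recalculate n.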

4.
Randomized clinical trials with count measurements as the primary outcome are common in various medical areas such as seizure counts in epilepsy trials, or relapse counts in multiple sclerosis trials. Controlled clinical trials frequently use a conventional parallel-group design that assigns subjects randomly to one of two treatment groups and repeatedly evaluates them at baseline and intervals across a treatment period of a fixed duration. The primary interest is to compare the rates of change between treatment groups. Generalized estimating equations (GEEs) have been widely used to compare rates of change between treatment groups because of their robustness to misspecification of the true correlation structure. In this paper, we derive a sample size formula for comparing the rates of change between two groups in a repeatedly measured count outcome using GEE. The sample size formula incorporates general missing patterns such as independent missing and monotone missing, and general correlation structures such as AR(1) and compound symmetry (CS). The performance of the sample size formula is evaluated through simulation studies. Sample size estimation is illustrated by a clinical trial example from epilepsy.

5.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited subjects to the experimental arm, historical control trials can better identify promising treatments in early phase trials compared with randomized control trials. Existing designs of historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution given the relatively small sample sizes of early phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints, and compare it with an asymptotic design in both real and simulated examples. Simulation results show that the asymptotic test could lead to bias in the sample size estimation. We conclude that the proposed exact design should be used in the design of historical control trials.
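An exact calculation is tractable here because, for n uncensored exponential event times with rate lambda, the total time 2*lambda*sum(T_i) follows a chi-square distribution with 2n degrees of freedom (an Erlang distribution), whose CDF has a closed form. The sketch below computes exact power for a one-sided test against a historical rate under that uncensored-data simplification; it is an illustration of the exact-design idea, not the authors' procedure.

```python
import math

def erlang_cdf(x, n, rate):
    """P(X <= x) for Erlang(shape n, rate), i.e. a Gamma with integer shape;
    closed form: 1 - exp(-rate*x) * sum_{k<n} (rate*x)^k / k!."""
    if x <= 0:
        return 0.0
    lam = rate * x
    term, s = 1.0, 1.0
    for k in range(1, n):
        term *= lam / k
        s += term
    return 1.0 - math.exp(-lam) * s

def exact_exp_power(n, lam0, lam1, alpha=0.05):
    """Exact power of the one-sided test of H0: rate = lam0 against a
    better-survival alternative rate lam1 < lam0, based on the total time
    T = sum of n uncensored exponential event times (T ~ Erlang(n, rate)).
    Reject when T exceeds the exact upper alpha critical value under H0."""
    lo, hi = 0.0, 100.0 * n / lam0
    for _ in range(200):  # bisection for c with P(T > c | lam0) = alpha
        mid = 0.5 * (lo + hi)
        if erlang_cdf(mid, n, lam0) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)
    return 1.0 - erlang_cdf(c, n, lam1)
```

A design search would then increase n until `exact_exp_power` reaches the target, with no normal approximation involved.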

6.
Sample size calculation is a critical issue in clinical trials because a small sample size leads to a biased inference and a large sample size increases the cost. With the development of advanced medical technology, some patients can be cured of certain chronic diseases, and the proportional hazards mixture cure model has been developed to handle survival data with potential cure information. Given the needs of survival trials with potential cure proportions, a corresponding sample size formula based on the log-rank test statistic for binary covariates has been proposed by Wang et al. [25]. However, a sample size formula based on continuous variables has not been developed. Herein, we present sample size and power calculations for the mixture cure model with continuous variables based on the log-rank method and further modify them using Ewell's method. The proposed approaches were evaluated using simulation studies for synthetic data from exponential and Weibull distributions. A program for calculating the necessary sample size for continuous covariates in a mixture cure model was implemented in R.

7.
In stratified otolaryngologic (or ophthalmologic) studies, misleading results may be obtained when ignoring the confounding effect and the correlation between responses from two ears. A score statistic and a Wald-type statistic are presented to test equality in a stratified bilateral-sample design, and their corresponding sample size formulae are given. A score statistic for testing homogeneity of the difference between two proportions and a score confidence interval for the common difference of two proportions in a stratified bilateral-sample design are also derived. Empirical results show that (1) the score and Wald-type statistics based on the dependence model assumption outperform the other statistics in terms of type I error rates; (2) the score confidence interval demonstrates reasonably good coverage properties; (3) the sample size formula based on the Wald-type statistic under the dependence model assumption is rather accurate. A real example is used to illustrate the proposed methodologies.

8.
When counting the number of chemical parts in air pollution studies or when comparing the occurrence of congenital malformations between a uranium mining town and a control population, we often assume a Poisson distribution for the number of these rare events. Some discussions on sample size calculation under the Poisson model appear elsewhere, but all of these focus on testing equality rather than testing equivalence. We discuss sample size and power calculation on the basis of the exact distribution under Poisson models for testing non-inferiority and equivalence with respect to the mean incidence rate ratio. On the basis of large sample theory, we further develop an approximate sample size calculation formula using the normal approximation of a proposed test statistic for testing non-inferiority and an approximate power calculation formula for testing equivalence. We find that using these approximation formulae tends to produce an underestimate of the minimum required sample size calculated from the exact test procedure. On the other hand, we find that the power corresponding to the approximate sample sizes can actually be accurate (with respect to Type I error and power) when we apply the asymptotic test procedure based on the normal distribution. We tabulate, in a variety of situations, the minimum mean incidence needed in the standard (or control) population, which can easily be employed to calculate the minimum required sample size from each comparison group for testing non-inferiority and equivalence between two Poisson populations.
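The asymptotic side of such a calculation can be sketched with a Wald test on the log rate ratio. The version below assumes one unit of exposure per subject and a non-inferiority margin expressed as a rate ratio; it is a standard large-sample approximation in the spirit of the abstract, not the authors' exact procedure, and the function name is ours.

```python
import math
from statistics import NormalDist

def n_noninferiority_poisson(lam_ctrl, lam_trt, margin_ratio,
                             alpha=0.05, power=0.8):
    """Approximate per-group n for a one-sided non-inferiority test on the
    Poisson rate ratio (one unit of exposure per subject), via a Wald test
    on the log ratio: H0: ratio >= margin_ratio vs H1: ratio < margin_ratio.
    Var(log ratio) is approximated by 1/(n*lam_trt) + 1/(n*lam_ctrl)."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - alpha) + nd.inv_cdf(power)
    gap = math.log(margin_ratio) - math.log(lam_trt / lam_ctrl)  # must be > 0
    var_unit = 1.0 / lam_trt + 1.0 / lam_ctrl
    return math.ceil(z * z * var_unit / (gap * gap))
```

As the abstract notes, such approximations can underestimate the n required by the exact test, so an exact check is advisable for small rates.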

9.
In pharmaceutical‐related research, we usually use clinical trial methods to identify valuable treatments and compare their efficacy with that of a standard control therapy. Although clinical trials are essential for ensuring the efficacy and postmarketing safety of a drug, conducting them is usually costly and time‐consuming. Moreover, allocating patients to treatments with little therapeutic effect is inappropriate for ethical and cost reasons. Hence, several 2‐stage designs in the literature reduce cost and shorten trial duration by using the conditional power obtained from interim analysis results to decide whether the less efficacious treatments should be continued in the next stage. However, the literature lacks discussion of the factors that influence the conditional power of a trial at the design stage. In this article, we calculate the optimal conditional power via the receiver operating characteristic curve method to assess the impacts on the quality of a 2‐stage design with multiple treatments, and we propose an optimal design that minimizes the expected sample size for choosing the best or most promising treatment(s) among several treatments under an optimal conditional power constraint. We provide tables of the 2‐stage design subject to optimal conditional power for various combinations of design parameters and use an example to illustrate our methods.
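For readers unfamiliar with the quantity being optimized, the sketch below computes the conditional power of a one-sided fixed-sample test given an interim z-statistic, in the standard Brownian-motion formulation. It is a generic illustration of conditional power, not the authors' ROC-based optimization; the default "current trend" drift assumption is one common convention.

```python
import math
from statistics import NormalDist

def conditional_power(z_interim, info_frac, theta=None, alpha=0.025):
    """Conditional power of a one-sided fixed-sample z-test given the
    interim statistic z_interim at information fraction t = info_frac.
    theta is the assumed drift (expected final z under the alternative);
    if None, the 'current trend' estimate z_interim / sqrt(t) is used.
    Uses B(t) = z*sqrt(t) with the remaining increment ~ N(theta*(1-t), 1-t)."""
    nd = NormalDist()
    t = info_frac
    z_a = nd.inv_cdf(1.0 - alpha)
    if theta is None:
        theta = z_interim / math.sqrt(t)
    b_t = z_interim * math.sqrt(t)
    return 1.0 - nd.cdf((z_a - b_t - theta * (1.0 - t)) / math.sqrt(1.0 - t))
```

A 2-stage design would drop an arm whose conditional power at the interim falls below a chosen threshold; the article's contribution is how to choose that threshold optimally.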

10.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. A procedure for reducing the maximum sample size is needed because group sequential tests often require large sample sizes. In order to reduce the sample size at each stage, we construct repeated confidence boundaries which enable us to find which of the two treatments is the more effective at an early stage. We then use recursive numerical integration formulae to determine the sample size at the intermediate stages. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.
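The motivation for adjusted boundaries such as Pocock's can be seen in a small Monte Carlo experiment: repeatedly testing at the nominal critical value inflates the overall type I error, while Pocock's constant restores it. The simulation below is an illustration of that phenomenon, not the authors' recursive-integration procedure; the value 2.413 is the standard tabulated Pocock constant for 5 equally spaced looks at two-sided alpha 0.05.

```python
import math
import random
from statistics import NormalDist

def overall_type1_error(k_looks, n_per_stage, crit_z, n_sim=10000, seed=1):
    """Monte Carlo type I error of a group sequential comparison of two
    normal means (unit variance, no true difference): at each of k_looks
    interim analyses the cumulative two-sample z-statistic is compared with
    crit_z, rejecting as soon as |Z| > crit_z."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        sum_a = sum_b = 0.0
        n = 0
        for _look in range(k_looks):
            for _ in range(n_per_stage):
                sum_a += rng.gauss(0.0, 1.0)
                sum_b += rng.gauss(0.0, 1.0)
            n += n_per_stage
            z = (sum_a / n - sum_b / n) / math.sqrt(2.0 / n)
            if abs(z) > crit_z:
                rejections += 1
                break
    return rejections / n_sim

z_naive = NormalDist().inv_cdf(0.975)  # 1.96: no multiplicity adjustment
z_pocock5 = 2.413                      # Pocock constant, 5 looks, alpha=0.05
```

With five unadjusted looks the overall error is roughly 14 percent; the price of the Pocock correction is a larger maximum sample size, which is the trade-off the abstract addresses.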

11.
The current practice of designing single‐arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single‐arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single‐arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

12.
In clinical trials with repeated measurements, the responses from each subject are measured multiple times during the study period. Two approaches have been widely used to assess the treatment effect, one that compares the rate of change between two groups and the other that tests the time-averaged difference (TAD). While sample size calculations based on comparing the rate of change between two groups have been reported by many investigators, the literature has paid relatively little attention to sample size estimation for the TAD in the presence of heterogeneous correlation structures and missing data in repeated measurement studies. In this study, we investigate sample size calculation for the comparison of time-averaged responses between treatment groups in clinical trials with longitudinally observed binary outcomes. The generalized estimating equation (GEE) approach is used to derive a closed-form sample size formula, which is flexible enough to account for arbitrary missing patterns and correlation structures. In particular, we demonstrate that the proposed sample size can accommodate a mixture of missing patterns, which is frequently encountered by practitioners in clinical trials. To our knowledge, this is the first study that considers the mixture of missing patterns in sample size calculation. Our simulation shows that the nominal power and type I error are well preserved over a wide range of design parameters. Sample size calculation is illustrated through an example.
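In the simplest special case (complete data, compound-symmetry correlation), the TAD sample size reduces to a two-proportion formula multiplied by the design effect (1 + (m-1)*rho)/m for m repeated measures. The sketch below implements that textbook special case only; the paper's GEE formula, which handles arbitrary missing patterns and correlation structures, is more general.

```python
import math
from statistics import NormalDist

def n_per_group_tad_binary(p1, p2, m, rho, alpha=0.05, power=0.8):
    """Per-group n for testing the time-averaged difference of two binary
    outcomes measured m times per subject, assuming compound-symmetry
    correlation rho and no missing data.
    Design effect for the subject-level mean: (1 + (m-1)*rho) / m."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - alpha / 2.0) + nd.inv_cdf(power)
    var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    deff = (1.0 + (m - 1) * rho) / m
    return math.ceil(z * z * var * deff / (p1 - p2) ** 2)
```

With m = 1 (or rho = 1) this collapses to the usual unpooled two-proportion formula; more repeated measures with modest correlation reduce the required number of subjects.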

13.
In the traditional study design of a single‐arm phase II cancer clinical trial, the one‐sample log‐rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a study design may not be suitable for immunotherapy cancer trials when both long‐term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate to describe such data and consequently could lead to a severely underpowered trial. In this research, we proposed a piecewise proportional hazards cure rate model with a random delayed treatment effect to design single‐arm phase II immunotherapy cancer trials. To improve test power, we proposed a new weighted one‐sample log‐rank test and provided a sample size calculation formula for designing trials. Our simulation study showed that the proposed log‐rank test performs well and is robust to a misspecified weight, and that the sample size calculation formula also performs well.

14.
Multi-arm trials are an efficient way of simultaneously testing several experimental treatments against a shared control group. As well as reducing the sample size required compared to running each trial separately, they have important administrative and logistical advantages. There has been debate over whether multi-arm trials should correct for the fact that multiple null hypotheses are tested within the same experiment. Previous opinions have ranged from no correction being required to a stringent correction being needed, one that controls the familywise error rate (FWER), the probability of making at least one type I error, with regulators arguing for the latter in confirmatory settings. In this article, we propose that controlling the false-discovery rate (FDR) is a suitable compromise, with an appealing interpretation in multi-arm clinical trials. We investigate the properties of the different correction methods in terms of the positive and negative predictive value (respectively, how confident we are that a recommended treatment is effective and that a non-recommended treatment is ineffective). The number of arms and the proportion of treatments that are truly effective are varied. Controlling the FDR provides good properties. It retains the high positive predictive value of FWER correction in situations where a low proportion of treatments is effective. It also has a good negative predictive value in situations where a high proportion of treatments is effective. In a multi-arm trial testing distinct treatment arms, we recommend that sponsors and trialists consider use of the FDR.
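The FDR-controlling procedure most commonly meant in this context is the Benjamini-Hochberg step-up rule, sketched below. This is the standard algorithm, applied here to the per-arm p-values of a multi-arm trial; mapping indices back to treatment arms is our framing.

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: sort the m p-values, find the
    largest rank k with p_(k) <= k*q/m, and reject the k smallest.
    Returns the indices of the rejected hypotheses (treatment arms)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / m:
            k_max = rank
    return sorted(order[:k_max])
```

Unlike a Bonferroni-type FWER correction, the threshold relaxes as more arms show small p-values, which is what preserves power when a high proportion of treatments is truly effective.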

15.
The stratified Cox model is commonly used for stratified clinical trials with time‐to‐event endpoints. The estimated log hazard ratio is approximately a weighted average of corresponding stratum‐specific Cox model estimates using inverse‐variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50‐200 subjects per treatment), we propose an alternative approach in which stratum‐specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug‐in formula for the variance of RGLR‐based estimated log hazard ratios. We demonstrate using simulations that our proposed two‐step RGLR analysis delivers notably better results through smaller estimation bias and mean squared error and larger power than the stratified Cox model analysis when there is a treatment‐by‐stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate while the stratified Cox model does not in small samples. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
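The combination step the abstract contrasts can be made concrete: given stratum-specific log hazard ratio estimates and their variances, one can pool them with inverse-variance weights or with sample-size weights. The sketch below shows only that generic weighting arithmetic, not the RGLR estimator or the minimum risk weights themselves.

```python
import math

def combine_log_hr(log_hrs, variances, sample_sizes,
                   weighting="inverse_variance"):
    """Combine stratum-specific log hazard ratio estimates into an overall
    (estimate, standard_error) pair, using either inverse-variance or
    sample-size weights normalized to sum to 1."""
    if weighting == "inverse_variance":
        weights = [1.0 / v for v in variances]
    elif weighting == "sample_size":
        weights = list(sample_sizes)
    else:
        raise ValueError(weighting)
    total = sum(weights)
    w = [wi / total for wi in weights]
    est = sum(wi * b for wi, b in zip(w, log_hrs))
    se = math.sqrt(sum(wi * wi * v for wi, v in zip(w, variances)))
    return est, se
```

Inverse-variance weights minimize the pooled variance but are optimal only when the true hazard ratio is constant across strata; sample-size weights trade a little variance for robustness to treatment-by-stratum interaction, which is the trade-off the paper exploits.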

16.
This paper is concerned with asymptotic distributions of functions of a sample covariance matrix under the elliptical model. Simple but useful formulae for calculating asymptotic variances and covariances of the functions are derived. Also, an asymptotic expansion formula for the expectation of a function of a sample covariance matrix is derived; it is given up to the second-order term with respect to the inverse of the sample size. Two examples are given: one of calculating the asymptotic variances and covariances of the stepdown multiple correlation coefficients, and the other of obtaining the asymptotic expansion formula for the moments of sample generalized variance.

17.
Sample size determination is essential during the planning phases of clinical trials. To calculate the required sample size for paired right-censored data, the structure of the within-pair correlation needs to be pre-specified. In this article, we consider using popular parametric copula models, including the Clayton, Gumbel, and Frank families, to model the distribution of joint survival times. Under each copula model, we derive a sample size formula based on the testing framework for rank-based and non-rank-based tests (i.e., the logrank test and the Kaplan–Meier statistic, respectively). We also investigate how the power or the sample size is affected by the choice of testing method and copula model under different alternative hypotheses. In addition, we examine the impacts of paired correlations, accrual times, follow-up times, and loss to follow-up rates on sample size estimation. Finally, two real-world studies are used to illustrate our method, and R code is available to the user.
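To make the copula idea tangible, the sketch below simulates paired survival times whose dependence follows a Clayton copula, using the standard conditional-inversion sampler and exponential margins. Such simulated pairs are the raw material for checking a copula-based sample size formula by Monte Carlo; this is a generic sampler, not the authors' derivation.

```python
import math
import random

def clayton_pair(theta, rng):
    """One (u1, u2) draw from the Clayton copula (theta > 0) via conditional
    inversion: u2 = (u1^-theta * (v^(-theta/(1+theta)) - 1) + 1)^(-1/theta)."""
    u1 = 1.0 - rng.random()  # in (0, 1]
    v = 1.0 - rng.random()
    u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) \
        ** (-1.0 / theta)
    return u1, u2

def paired_exponential_times(theta, rate1, rate2, n, seed=7):
    """Simulate n pairs of positively correlated exponential survival times;
    the Clayton dependence gives Kendall's tau = theta / (theta + 2)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u1, u2 = clayton_pair(theta, rng)
        pairs.append((-math.log(u1) / rate1, -math.log(u2) / rate2))
    return pairs
```

Varying theta sweeps the within-pair correlation while leaving the exponential margins fixed, which is exactly the separation of dependence from marginal survival that motivates the copula formulation.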

18.
In placebo‐controlled, double‐blinded, randomized clinical trials, the presence of placebo responders reduces the effect size for the comparison of the active drug group with the placebo group. An attempt to resolve this problem is to use the sequential parallel comparison design (SPCD). Although there are SPCDs with dichotomous or continuous outcomes, an SPCD with negative binomial outcomes, which investigators encounter in, for example, clinical trials for multiple sclerosis where the presence of placebo responders remains a concern, has not yet been discussed. In this article, we propose a simple test for the treatment effect in clinical trials with an SPCD and negative binomial outcomes. Through simulations, we show that the analysis method achieves the nominal type I error rate and power, and that the sample size calculation provides a sample size with adequate accuracy in power.

19.
Multiple-arm dose-response superiority trials are widely studied for continuous and binary endpoints, while non-inferiority designs have been studied recently in two-arm trials. In this paper, a unified asymptotic formulation of sample size calculation for k-arm trials with different endpoints (continuous, binary, and survival) is derived for both superiority and non-inferiority designs. The proposed method covers sample size calculation for single-arm and k-arm (k ≥ 2) designs with survival endpoints, which has not been covered in the statistical literature. A simple closed form for power and sample size calculations is derived from a contrast test. Application examples are provided. The effect of the contrasts on the power is discussed, and a SAS program for sample size calculation is provided and ready to use.
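The closed form for a single-contrast test is simple enough to sketch for the continuous-endpoint case: with contrast coefficients c_i summing to zero and a known common sigma, the per-arm n satisfies n = (z_alpha + z_beta)^2 * sigma^2 * sum(c_i^2) / (sum(c_i * mu_i))^2 under equal allocation. This is a generic version written in Python rather than the paper's SAS program, and it covers only this one case.

```python
import math
from statistics import NormalDist

def n_per_arm_contrast(means, contrast, sigma, alpha=0.05, power=0.8):
    """Equal per-arm sample size for a two-sided single-contrast z-test of
    H0: sum(c_i * mu_i) = 0, continuous endpoint with known common sigma.
    Test statistic: sum(c_i * xbar_i) / sqrt(sigma^2 * sum(c_i^2) / n)."""
    if abs(sum(contrast)) > 1e-12:
        raise ValueError("contrast coefficients must sum to zero")
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - alpha / 2.0) + nd.inv_cdf(power)
    effect = sum(c * m for c, m in zip(contrast, means))
    return math.ceil(z * z * sigma ** 2
                     * sum(c * c for c in contrast) / effect ** 2)
```

With two arms and contrast (-1, 1) this reduces to the classical two-sample formula; for a dose-response trial, a linear trend contrast such as (-1, 0, 1) over equally spaced dose means gives the same arithmetic, illustrating how the choice of contrast drives power.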

20.
To shorten the drug lag, or the time lag to approval, simultaneous worldwide drug development, submission, and approval may be desirable. Recently, multi-regional trials have attracted much attention from sponsors as well as regulatory authorities. Current methods for sample size determination are based on the assumption that the true treatment effect is uniform across regions. However, unrecognized heterogeneity among patients, such as ethnic or genetic factors, will affect patient survival. In this article, we address the design of a multi-regional trial when unrecognized heterogeneity interacts with treatment, so that treatment effects differ among regions. The log-rank test is employed to deal with the heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for a multi-regional trial, and a consistency criterion (consistent trend) is used to rationalize the partition of the sample size across regions.
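The overall-effect step of such a design is usually driven by the required number of events; Schoenfeld's standard approximation for the log-rank test is sketched below, together with a naive proportional split across regions. The proportional split is our simplification for illustration; the paper's consistency-criterion partition is more elaborate.

```python
import math
from statistics import NormalDist

def required_events_logrank(hazard_ratio, alpha=0.025, power=0.9):
    """Schoenfeld's approximation for the total number of events needed by
    a one-sided log-rank test with 1:1 allocation:
    d = 4 * (z_{1-alpha} + z_{power})^2 / (log HR)^2."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - alpha) + nd.inv_cdf(power)
    return math.ceil(4.0 * z * z / math.log(hazard_ratio) ** 2)

def events_by_region(total_events, region_fractions):
    """Naive split of the total required events across regions in given
    proportions (ceilings, so the sum may slightly exceed the total)."""
    return [math.ceil(total_events * f) for f in region_fractions]
```

Converting events to patients then requires accrual and follow-up assumptions per region, which is where region-specific survival heterogeneity enters the design.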


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)