Similar Literature
20 similar documents found (search time: 15 ms)
1.
Sample size estimation for comparing the rates of change in two-arm repeated measurements has been investigated by many authors. In contrast, the literature has paid relatively little attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004, Biometrical J. 46(5):554-564) and Zhang and Ahn (2013, Comput. Stat. Data Anal. 58:283-291) presented sample size formulas to compare the rates of change and time-averaged responses in multi-arm trials using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing the rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed formula under a wide range of design configurations. Simulation results suggest that the empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated using an epileptic clinical trial example.
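The paper's count-outcome GEE formula is not reproduced in this abstract, but formulas of this family share a common skeleton: the required n equals squared normal quantiles times the (design-dependent) variance of the estimated contrast, divided by the squared effect size. A minimal sketch of that skeleton, with purely hypothetical inputs (the `var_eff` value is an illustrative placeholder, not a value from the paper):

```python
from math import ceil
from statistics import NormalDist

def gee_style_sample_size(effect, var_eff, alpha=0.05, power=0.8):
    """Generic skeleton shared by GEE sample-size formulas:
    n = (z_{1-alpha/2} + z_{power})^2 * var_eff / effect^2,
    where var_eff is the variance of the estimated contrast under
    the working correlation, missing-data pattern, and allocation."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return ceil((za + zb) ** 2 * var_eff / effect ** 2)

# hypothetical numbers: detect a slope difference of 0.1
# when the contrast variance works out to 1.5
print(gee_style_sample_size(0.1, 1.5))
```

The design-specific work in papers like this one lies entirely in deriving `var_eff`; the quantile arithmetic around it is standard.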

2.
In clinical trials with repeated measurements, the responses from each subject are measured multiple times during the study period. Two approaches have been widely used to assess the treatment effect: one compares the rate of change between two groups, and the other tests the time-averaged difference (TAD). While sample size calculations based on comparing the rate of change between two groups have been reported by many investigators, the literature has paid relatively little attention to sample size estimation for the TAD in the presence of heterogeneous correlation structure and missing data in repeated measurement studies. In this study, we investigate sample size calculation for the comparison of time-averaged responses between treatment groups in clinical trials with longitudinally observed binary outcomes. The generalized estimating equation (GEE) approach is used to derive a closed-form sample size formula, which is flexible enough to account for arbitrary missing patterns and correlation structures. In particular, we demonstrate that the proposed sample size can accommodate a mixture of missing patterns, which is frequently encountered by practitioners in clinical trials. To our knowledge, this is the first study that considers a mixture of missing patterns in sample size calculation. Our simulation shows that the nominal power and type I error are well preserved over a wide range of design parameters. Sample size calculation is illustrated through an example.

3.
We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where overall study duration is fixed and subject enrollment is staggered with variable follow-up. An adaptation of previously developed formulae for the superiority framework is presented that specifically allows for effect reversal under the non-inferiority setting, and its consequent effect on variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type.

4.
In many cluster randomization studies, cluster sizes are not fixed and may be highly variable. For those studies, sample size estimation assuming a constant cluster size may lead to under-powered studies. Sample size formulas have been developed to incorporate the variability in cluster size for clinical trials with continuous and binary outcomes. Count outcomes frequently occur in cluster randomized studies. In this paper, we derive a closed-form sample size formula for count outcomes accounting for the variability in cluster size. We compare the performance of the proposed method with the average cluster size method through simulation. The simulation study shows that the proposed method performs better, with empirical power and type I error closer to the nominal levels.
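The paper's count-outcome formula is not given in the abstract, but the continuous/binary precedent it builds on can be illustrated with the well-known design effect that inflates with the coefficient of variation (cv) of cluster size. A hedged sketch with made-up numbers, showing why ignoring cluster-size variability (cv = 0) understates the required number of clusters:

```python
from math import ceil

def design_effect(m_bar, cv, icc):
    # variable-cluster-size design effect (Eldridge et al. form):
    # DE = 1 + ((cv^2 + 1) * m_bar - 1) * icc
    return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

def clusters_needed(n_individual, m_bar, cv, icc):
    # inflate an individually randomized sample size n_individual,
    # then convert to clusters of average size m_bar
    de = design_effect(m_bar, cv, icc)
    return ceil(n_individual * de / m_bar)

# hypothetical design: 300 subjects needed individually,
# mean cluster size 20, ICC 0.05
print(clusters_needed(300, m_bar=20, cv=0.0, icc=0.05))  # constant size
print(clusters_needed(300, m_bar=20, cv=0.8, icc=0.05))  # variable size
```

The gap between the two outputs is the under-powering the abstract warns about; the paper's actual count-outcome formula will differ in detail.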

5.
This paper provides closed form expressions for the sample size for two-level factorial experiments when the response is the number of defectives. The sample sizes are obtained by approximating the two-sided test for no effect through tests for the mean of a normal distribution, and borrowing the classical sample size solution for that problem. The proposals are appraised relative to the exact sample sizes computed numerically, without appealing to any approximation to the binomial distribution, and the use of the sample size tables provided is illustrated through an example.
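The "classical sample size solution" borrowed here is the familiar normal-approximation formula. As a simplified stand-in for the factorial setting (not the paper's exact expressions), the per-level sample size for detecting a shift in defective rate between a factor's two levels, with hypothetical rates 5% vs. 10%:

```python
from math import ceil
from statistics import NormalDist

def n_per_level(p_lo, p_hi, alpha=0.05, power=0.8):
    # classical normal-approximation sample size for detecting a
    # difference in defective rates between two factor levels
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    var = p_lo * (1 - p_lo) + p_hi * (1 - p_hi)
    return ceil((za + zb) ** 2 * var / (p_hi - p_lo) ** 2)

print(n_per_level(0.05, 0.10))
```

The paper appraises such approximations against exact binomial computations; this sketch only shows the approximate side of that comparison.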

6.
The poor performance of the Wald method for constructing confidence intervals (CIs) for a binomial proportion has been demonstrated in a vast literature. The related problem of sample size determination needs to be updated, and comparative studies are essential to understanding the performance of alternative methods. In this paper, the sample size is obtained for the Clopper–Pearson, Bayesian (Uniform and Jeffreys priors), Wilson, Agresti–Coull, Anscombe, and Wald methods. Two two-step procedures are used: one based on the expected length (EL) of the CI and another on its first-order approximation. In the first step, all possible solutions that satisfy the optimal criterion are obtained. In the second step, a single solution is proposed according to a new criterion (e.g. highest coverage probability (CP)). In practice, a sample size reduction is expected; we therefore explore the behavior of the methods when 30% and 50% losses are admitted. For all the methods, the ELs are inflated, as expected, but the coverage probabilities remain close to the original target (with few exceptions). It is not easy to suggest a method that is optimal throughout the range (0, 1) for p. Depending on whether the goal is to achieve CP approximately at or above the nominal level, different recommendations are made.
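The expected-length criterion in the first step can be made concrete for one of the listed methods. A minimal sketch for the Wilson interval (the target length 0.10 and assumed p = 0.3 are illustrative choices, not values from the paper): average the CI length over the binomial distribution of x, then take the smallest n whose expected length meets the target.

```python
from math import comb, sqrt
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # 95% two-sided CI

def wilson_length(x, n):
    # length of the Wilson score interval for x successes in n trials
    half = Z / (n + Z * Z) * sqrt(x * (n - x) / n + Z * Z / 4)
    return 2 * half

def expected_length(n, p):
    # EL = sum over x of Binomial(n, p) pmf times the CI length
    return sum(comb(n, x) * p ** x * (1 - p) ** (n - x) * wilson_length(x, n)
               for x in range(n + 1))

def min_n(p, target, n_max=2000):
    # smallest n whose expected Wilson CI length is <= target
    for n in range(2, n_max):
        if expected_length(n, p) <= target:
            return n
    return None

n = min_n(0.3, 0.10)
```

Repeating this for each CI method (and then screening candidates by coverage probability) reproduces the two-step structure the abstract describes.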

7.
Monitoring clinical trials in nonfatal diseases where ethical considerations do not dictate early termination upon demonstration of efficacy often requires examining the interim findings to assure that the protocol-specified sample size will provide sufficient power against the null hypothesis when the alternative hypothesis is true. The sample size may be increased, if necessary, to assure adequate power. This paper presents a new method for carrying out such interim power evaluations for observations from normal distributions without unblinding the treatment assignments or discernibly affecting the Type I error rate. Simulation studies confirm the expected performance of the method.

8.
When designing a clinical trial, an appropriate justification for the sample size should be provided in the protocol. However, there are a number of settings when undertaking a pilot trial in which there is no prior information on which to base a sample size. For such pilot studies the recommendation is a sample size of 12 per group. The justifications for this sample size are based on rationale about feasibility; precision about the mean and variance; and regulatory considerations. The context of the justifications is that future studies will use the information from the pilot in their design. Copyright © 2005 John Wiley & Sons, Ltd.

9.
The problem of confidence estimation of a normal mean vector when data on different subsets of response variables are missing is considered. A simple approximate confidence region is proposed when the data matrix is of monotone pattern. Simultaneous inferential procedures based on Scheffe's method and Bonferroni's method are outlined. Further, applications of the results to a repeated measurements model are given. The results are illustrated using a practical example.

10.
Determination of an adequate sample size is critical to the design of research ventures. For clustered right-censored data, Manatunga and Chen [Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes. Biometrics. 2000;56(2):616-621] proposed a sample size calculation that models the bivariate marginal distribution with a Clayton copula. Beyond the Clayton copula, other important families of copulas, such as the Gumbel and Frank copulas, are also well established in multivariate survival analysis. However, sample size calculation under these assumptions has not been fully investigated. To broaden the scope of Manatunga and Chen's research and achieve a more flexible sample size calculation for clustered right-censored data, we extend their work by modeling the bivariate margins with Gumbel and Frank copulas. We evaluate the performance of the proposed method and investigate the impact of the accrual times, follow-up times, and within-cluster correlation of the study. The proposed method is applied to two real-world studies, and the R code is made available to users.
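Simulating from the Clayton-coupled survival model underlying this line of work is straightforward via conditional inversion. A sketch (all parameter values are illustrative, not from the paper) that generates correlated exponential pairs and checks Kendall's tau against the Clayton identity tau = theta / (theta + 2):

```python
import random
from math import log

def clayton_pair(theta, rng):
    # conditional-inversion sampling from a Clayton(theta) copula:
    # u2 = (u1^{-theta} (v2^{-theta/(1+theta)} - 1) + 1)^{-1/theta}
    v1, v2 = rng.random(), rng.random()
    u1 = v1
    u2 = (u1 ** (-theta) * (v2 ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u1, u2

def clustered_times(theta, lam, n, seed=1):
    # exponential(lam) margins coupled by a Clayton copula, mimicking
    # within-cluster dependence of paired right-censorable survival times
    rng = random.Random(seed)
    return [(-log(u1) / lam, -log(u2) / lam)
            for u1, u2 in (clayton_pair(theta, rng) for _ in range(n))]

def kendall_tau(pairs):
    # O(n^2) concordance count; no ties for continuous data
    n = len(pairs)
    conc = sum((a1 - b1) * (a2 - b2) > 0
               for i, (a1, a2) in enumerate(pairs)
               for (b1, b2) in pairs[i + 1:])
    return 4 * conc / (n * (n - 1)) - 1

pairs = clustered_times(theta=2.0, lam=0.1, n=500)
```

With theta = 2 the theoretical tau is 0.5; the sample tau from the generated pairs should land nearby. Gumbel and Frank copulas, the paper's extensions, need different (and for Frank, slightly messier) conditional inverses.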

11.
In this article, small area estimation under a multivariate linear model for repeated measures data is considered. The proposed model borrows strength both across small areas and over time, and accounts for repeated surveys, grouped response units, and random effects variations. Estimation of model parameters is discussed within a likelihood-based approach. Predictions of random effects, of small area means across time points, and of per-group units are derived. A parametric bootstrap method is proposed for estimating the mean squared error of the predicted small area means. Results are supported by a simulation study.

12.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Besides the inflation of the type I error rate due to sample size re-estimation, the method for calculating the sample size in an interim analysis should be carefully considered because the data in each stage are mutually dependent in trials with survival data. Although the interim hazard estimate is commonly used to re-estimate the sample size, the estimate can sometimes be considerably higher or lower than the hypothesized hazard by chance. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and an actual clinical trial example. The effect of the shape parameter of the Weibull survival distribution on the sample size re-estimation is presented.
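The sensitivity of the required sample size to the interim hazard ratio estimate can be seen with Schoenfeld's standard events formula (a textbook approximation, not the paper's proposed estimator). A sketch assuming 1:1 allocation and illustrative hazard ratios:

```python
from math import ceil, log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.8):
    # Schoenfeld's approximation for a two-sided log-rank test,
    # 1:1 allocation: d = 4 (z_{1-alpha/2} + z_{power})^2 / (log hr)^2
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(4 * (za + zb) ** 2 / log(hr) ** 2)

print(required_events(0.70))  # hypothesized planning value
print(required_events(0.75))  # a chance-weaker interim estimate
```

A modest drift in the interim estimate (0.70 to 0.75) inflates the required events substantially, which is exactly why the abstract cautions against plugging in raw interim estimates uncritically.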

13.
The authors address the problem of estimating an inter-event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter-event distribution utilizing the EM algorithm, both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter-event distribution on the basis of time-interval data; it combines the outputs of the E-step corresponding to the inter-event distribution and to the length-biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee-tree leafy axes.

14.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.

15.
Tolerance intervals have long been used for statistical quality control of raw materials and/or the final product. The traditional tolerance interval treats the variance of the measurements as a single component. In practice, however, several components may contribute to the variation in the measurements, so an approximate method is needed to modify the traditional tolerance interval. We therefore employ a tolerance interval that allows for multiple variance components in the measurements within the quality control process. In this paper, the proposed method is used to solve the sample size determination problem for a two-sided tolerance interval approach with multiple variance components.

16.
A stochastic model is proposed to analyze the observation vectors of variable lengths in a long-term clinical trial. Using a Markovian normal density, the likelihood ratio tests for usual hypotheses are derived and asymptotic distributions of the test statistics are obtained. The use of a 'step-down' procedure is discussed for the interim analysis and a numerical example is given to illustrate the methodology.

17.
For binary endpoints, the required sample size depends not only on the known values of significance level, power and clinically relevant difference but also on the overall event rate. However, the overall event rate may vary considerably between studies and, as a consequence, the assumptions made in the planning phase about this nuisance parameter are to a great extent uncertain. The internal pilot study design is an appealing strategy to deal with this problem. Here, the overall event probability is estimated during the ongoing trial based on the pooled data of both treatment groups and, if necessary, the sample size is adjusted accordingly. From a regulatory viewpoint, besides preserving blindness it is required that any consequences for the Type I error rate be explained. We present analytical computations of the actual Type I error rate for the internal pilot study design with binary endpoints and compare them with the actual level of the chi-square test for the fixed sample size design. A method is given that permits control of the specified significance level for the chi-square test under blinded sample size recalculation. Furthermore, the properties of the procedure with respect to power and expected sample size are assessed. Throughout the paper, both the situation of equal sample size per group and unequal allocation ratio are considered. The method is illustrated with application to a clinical trial in depression. Copyright © 2004 John Wiley & Sons Ltd.
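The blinded recalculation step can be sketched with the simple pooled-variance normal-approximation formula (the paper's contribution, controlling the chi-square test's level under this recalculation, is not reproduced here; the rates 0.30/0.40 and difference 0.15 are illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p_bar, delta, alpha=0.05, power=0.8):
    # normal-approximation sample size per group using the blinded
    # (pooled over both arms) event-rate estimate p_bar as the
    # nuisance parameter: n = (z_{1-a/2}+z_pow)^2 * 2 p(1-p) / delta^2
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil((za + zb) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2)

# planning assumed an overall rate of 0.30; the internal pilot
# observes 0.40, so the blinded re-estimation increases n
print(n_per_group(0.30, 0.15))
print(n_per_group(0.40, 0.15))
```

Because only the pooled rate enters the formula, treatment assignments stay blinded throughout the recalculation, which is the regulatory appeal of the design.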

18.
This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review, on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates.

19.
In this article, we conduct a Monte Carlo study to examine four balancing scores (BS1: propensity score, BS2: prognostic score, BS3: adjusted propensity score estimated by the estimated prognostic score, and BS4: adjusted propensity score estimated by the estimated prognostic score and other covariates) for adjusting bias in estimating the marginal and the conditional rate ratios of count data in observational studies. Simulation results show that BS1–BS4 do not differ much in estimating the marginal and the conditional rate ratios; however, choosing the appropriate matching algorithm is more important than selecting a balancing score.

20.
Count data are collected in many scientific fields. One way to analyze such data is to compare the individual levels of the treatment factor using multiple comparisons. However, the measured individuals are often clustered, e.g. according to litter or rearing, and this must be considered when estimating the parameters by a repeated measurement model. In addition, ignoring the overdispersion to which count data are prone leads to an increased type I error rate. We carry out simulation studies using several different data settings and compare different multiple contrast tests with parameter estimates from generalized estimating equations and generalized linear mixed models in order to observe coverage and rejection probabilities. We generate overdispersed, clustered count data in small samples, as can be observed in many biological settings. We have found that the generalized estimating equations outperform generalized linear mixed models if the sandwich variance estimator is correctly specified. Furthermore, generalized linear mixed models show convergence problems under certain data settings, although some model implementations are less affected. Finally, we use an example of genetic data to demonstrate the application of the multiple contrast test and the problems of ignoring strong overdispersion.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号