Similar Articles
20 similar articles found (search time: 479 ms)
1.
An estimated sample size is a function of three components: the required power, the predetermined Type I error rate, and the specified effect size. For Normal data, the standardized effect size is taken as the difference between two means divided by an estimate of the population standard deviation. However, in early phase trials one may not have a good estimate of the population variance, as it is often based on the results of a few relatively small trials. The imprecision of this estimate should be taken into account in sample size calculations. When estimating a trial sample size, this paper recommends that one should investigate the sensitivity of the trial to the assumptions made about the variance and consider being adaptive in one's trial design. Copyright © 2004 John Wiley & Sons Ltd.
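The three-component calculation described above can be sketched with normal quantiles from Python's standard library; the loop illustrates the paper's point that the resulting n is quite sensitive to the assumed standard deviation (all numbers are hypothetical planning values, not from the paper):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample z-test (two-sided)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# Sensitivity of n to the assumed SD for a target difference of 1
for sd in (0.8, 1.0, 1.2):
    print(sd, n_per_group(delta=1.0, sd=sd))
```

A 20% misjudgment of the standard deviation in either direction moves the per-group size by roughly a third, which is the sensitivity the abstract recommends examining.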

2.
The delete-a-group jackknife is sometimes used when estimating the variances of statistics based on a large sample. We investigate heavily poststratified estimators for a population mean and a simple regression coefficient, where both full-sample and domain estimates are of interest. The delete-a-group (DAG) jackknife employing 30, 60, and 100 replicates is found to be highly unstable, even for large sample sizes. The empirical degrees of freedom of these DAG jackknives are usually much less than their nominal degrees of freedom. This analysis calls into question whether coverage intervals derived from replication-based variance estimators can be trusted for highly calibrated estimates.
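A minimal sketch of the delete-a-group idea for the variance of a simple sample mean follows (illustrative only; a real survey application would carry the poststratification and calibration weights through every replicate, which is where the instability reported above arises):

```python
import random
import statistics

def dag_jackknife_variance(y, n_groups=30, seed=1):
    """Delete-a-group jackknife variance of the sample mean."""
    rng = random.Random(seed)
    idx = list(range(len(y)))
    rng.shuffle(idx)
    # Assign the shuffled units systematically to G random groups
    groups = [idx[g::n_groups] for g in range(n_groups)]
    full_mean = statistics.fmean(y)
    reps = []
    for grp in groups:
        drop = set(grp)
        # Replicate estimate with one group deleted
        reps.append(statistics.fmean(y[i] for i in idx if i not in drop))
    g = n_groups
    return (g - 1) / g * sum((r - full_mean) ** 2 for r in reps)
```

With only 30 replicates the estimator itself has few degrees of freedom, so repeated runs with different seeds scatter noticeably around the true variance of the mean, consistent with the instability the abstract reports.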

3.
In this paper, an exact variance of the one‐sample log‐rank test statistic is derived under the alternative hypothesis, and a sample size formula is proposed based on the derived exact variance. Simulation results showed that the proposed sample size formula provides adequate power to design a study to compare the survival of a single sample with that of a standard population. Copyright © 2014 John Wiley & Sons, Ltd.

4.
The authors discuss the bias of the estimate of the variance of the overall effect synthesized from individual studies by the variance-weighted method. This bias is proven to be negative. Furthermore, the conditions for, likelihood of, and magnitude of the underestimation from this conventional estimate are studied under the assumption that the effect estimates are normally distributed with a common mean. The likelihood of underestimation is very high (e.g., greater than 85% when the sample sizes in two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given in order to adjust for the sample size and the variation of the population variance. In addition, the sample-size-weighted method is suggested if the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the three estimation methods.
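The variance-weighted synthesis under discussion is ordinary inverse-variance pooling; a short sketch makes explicit which quantity carries the bias (the assumption here is simply that each study supplies an effect estimate and an estimated variance):

```python
def inverse_variance_pool(effects, variances):
    """Pool study effects with weights w_i = 1 / v_i.

    The conventional variance of the pooled effect, 1 / sum(w_i),
    treats the estimated v_i as if they were known constants; this
    is the quantity shown above to be biased downward.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)
```

With equal study variances the pooled effect is just the simple average; with unequal variances the more precise study dominates both the pooled estimate and its (conventionally understated) variance.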

5.
Information before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques, which use point estimates of auxiliary parameters to estimate the expected blinded sample size, (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of auxiliary parameters. Future decisions about additional subjects are better informed due to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to ease of implementation.

6.
In studies with recurrent event endpoints, misspecified assumptions of event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. The operating characteristics of these procedures are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.

7.
On the use of corrections for overdispersion
In studying fluctuations in the size of a black grouse (Tetrao tetrix) population, an autoregressive model using climatic conditions appears to follow the change quite well. However, the deviance of the model is considerably larger than its number of degrees of freedom. A widely used statistical rule of thumb holds that overdispersion is present in such situations, but model selection based on a direct likelihood approach can produce opposing results. Two further examples, of binomial and of Poisson data, have models with deviances that are almost twice the degrees of freedom, and yet various overdispersion models do not fit better than the standard model for independent data. This can arise because the rule of thumb only considers a point estimate of dispersion, without regard for any measure of its precision. A reasonable criterion for detecting overdispersion is that the deviance be at least twice the number of degrees of freedom, the familiar Akaike information criterion, but the actual presence of overdispersion should then be checked by some appropriate modelling procedure.
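For Poisson data, the rule of thumb and the stricter deviance criterion can be made concrete as follows (a sketch; the fitted means `mu` are assumed to come from whatever model has already been estimated):

```python
import math

def poisson_deviance(y, mu):
    """Residual deviance of a Poisson model with fitted means mu."""
    dev = 0.0
    for yi, mi in zip(y, mu):
        # The y*log(y/mu) term vanishes when the observed count is zero
        term = yi * math.log(yi / mi) if yi > 0 else 0.0
        dev += 2.0 * (term - (yi - mi))
    return dev

def overdispersion_flags(deviance, df):
    return {
        "rule_of_thumb": deviance > df,    # widely used, too eager
        "aic_style": deviance > 2 * df,    # criterion discussed above
    }
```

A model with deviance around 1.5 times its degrees of freedom trips the rule of thumb but not the stricter criterion, which is exactly the region where the abstract's examples show overdispersion models failing to improve the fit.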

8.
We consider the adjustment, based upon a sample of size n, of collections of vectors drawn from either an infinite or finite population. The vectors may be judged to be either normally distributed or, more generally, second-order exchangeable. We develop the work of Goldstein and Wooff (1998) to show how the familiar univariate finite population corrections (FPCs) naturally generalise to individual quantities in the multivariate population. The types of information we gain by sampling are identified with the orthogonal canonical variable directions derived from a generalised eigenvalue problem. These canonical directions share the same co-ordinate representation for all sample sizes and, for equally defined individuals, all population sizes, enabling simple comparisons between both the effects of different sample sizes and of different population sizes. We conclude by considering how the FPC is modified for multivariate cluster sampling with exchangeable clusters. In univariate two-stage cluster sampling, we may decompose the variance of the population mean into the sum of the variance of cluster means and the variance of the cluster members within clusters. The first term has a FPC relating to the sampling fraction of clusters, the second term has a FPC relating to the sampling fraction of cluster size. We illustrate how this generalises in the multivariate case. We decompose the variance into two terms: the first relating to multivariate finite population sampling of clusters and the second to multivariate finite population sampling within clusters. We solve two generalised eigenvalue problems to show how to generalise the univariate to the multivariate: each of the two FPCs attaches to one, and only one, of the two eigenbases.

9.
Binomial trial sample size specification depends upon the values of the unknown response rate parameters, as well as upon the size and power of the resulting test. In practice, the values assumed for these parameters are based upon the results of previous or pilot trials, or upon the investigator's prior knowledge or belief. In either case, there is some uncertainty associated with these values that should be taken into account if the sample sizes are to be specified realistically. This paper describes a procedure for incorporating this uncertainty explicitly into the sample size determination on the basis of joint confidence distributions obtained from the pilot or prior information.

10.
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining the trial integrity. Various blinded sample size re‐estimation procedures have been proposed in the literature. We compare the blinded sample size re‐estimation procedures based on the one‐sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re‐estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re‐estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one‐sample estimator, as does, in turn, the sample size resulting from the related re‐estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one‐sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
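A minimal sketch of the recommended one-sample approach, assuming a continuous endpoint, 1:1 allocation, and the usual two-sample z-approximation (under the alternative the pooled variance overstates σ² by about δ²/4, which acts as a built-in conservative margin):

```python
import math
import statistics
from statistics import NormalDist

def blinded_ssr(pooled, delta, alpha=0.05, power=0.80):
    """Per-group sample size re-estimated from the blinded
    one-sample variance of the pooled interim data; no treatment
    labels are needed, so the blind is preserved."""
    s2 = statistics.variance(pooled)  # one-sample variance of all data
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * s2 / delta ** 2
    return math.ceil(n)
```

The function sees only the pooled responses, never the randomization list, which is what keeps the procedure blinded and (as the abstract notes) leaves the type I error rate essentially unaffected.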

11.
In this paper the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation‐based approach, together with further alternatives, one based on a single sample of the Bayesian Net of a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size methods may be useful and are simpler to compute. Any method at all is more useful than none, when assessing a Bayesian Net under development, or when drawing conclusions from an ‘expert’ system.

12.
A simulation study was conducted to assess how well the sample size necessary to achieve a stipulated margin of error can be estimated prior to sampling. Our concern was particularly focused on performance when sampling from a very skewed distribution, a common feature of many biological, economic, and other populations. We examined two approaches for estimating sample size: the commonly used strategy aimed at regulating the average magnitude of the stipulated margin of error, and a previously proposed strategy to control the tolerance probability with which the stipulated margin of error is exceeded. Results of the simulation revealed that (1) skewness does not much affect the average estimated sample size but can greatly extend the range of estimated sample sizes; and (2) skewness does reduce the effectiveness of Kupper and Hafner's sample size estimator, yet its effectiveness is impacted less by skewness directly than by the common practice of estimating the population variance via a pilot sample from the skewed population. Nonetheless, the simulations suggest that estimating sample size to control the probability with which the desired margin of error is achieved is a worthwhile alternative to the usual sample size formula, which controls only the average width of the confidence interval.
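The first ("average margin") strategy can be sketched as follows; the tolerance-probability competitor would additionally inflate the pilot variance using a chi-square quantile (the numbers are illustrative, and the pilot observations are assumed independent):

```python
import math
import statistics
from statistics import NormalDist

def n_for_margin(pilot, margin, conf=0.95):
    """Sample size so the confidence-interval half-width equals
    `margin` on average, using the pilot sample's variance.
    Controls only the average width, not the probability of
    actually achieving the margin."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return math.ceil(z * z * statistics.variance(pilot) / margin ** 2)
```

Because the pilot variance from a skewed population is itself highly variable, repeated pilots feed very different variances into this formula, producing the wide range of estimated sample sizes the simulation reports.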

13.
Sample size determination is one of the most commonly encountered tasks in the design of applied research. The general guideline suggests that a pilot study can offer plausible planning values for the vital model characteristics. This article examines two viable approaches to taking into account the imprecision of a variance estimate in sample size calculations for linear statistical models. The multiplier procedure employs an adjusted sample variance in the form of a multiple of the observed sample variance. The Bayesian method accommodates the uncertainty of a sample variance through a prior distribution. It is shown that the two seemingly distinct techniques are equivalent for sample size determination under the designated assurance requirements that the actual power exceeds the planned threshold with a given tolerance probability, or that the expected power attains the desired level. The selection of an optimum pilot sample size for minimizing the expected total cost is also considered.
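One common form of the multiplier procedure can be sketched as follows; note that the Wilson-Hilferty approximation to the chi-square quantile is an implementation convenience here (the Python standard library has no chi-square inverse), not part of the article's method:

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def multiplier_adjusted_variance(s2, df, tolerance=0.80):
    """Inflate a pilot variance so the true variance is covered with
    probability ~= tolerance: since df * s2 / sigma^2 ~ chi2(df),
    the multiplier is df / chi2_quantile(1 - tolerance, df)."""
    return s2 * df / chi2_quantile(1 - tolerance, df)
```

Feeding the adjusted variance into an ordinary sample size formula yields the assurance-style behaviour described above: the smaller the pilot degrees of freedom, the larger the multiplier, and hence the larger the planned study.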

14.
To quantify how worthwhile and reliable an interval estimate of a parameter is, an index is defined. This index is derived for estimating the mean μ of a normal population. Using this index, a method is devised for deriving a sample size formula. Advantages of this new sample size formula are pointed out.

15.
This paper gives a two-sample procedure for selecting the m populations with the largest means from k normal populations with unknown variances. The method is a generalization of a recent work by Ofosu [1973] and hence should find wider practical applications. The experimenter takes an initial sample of preset size N0 from each population and computes an unbiased estimate of its variance. From this estimate he determines the second sample size for the population according to a table presented for this purpose. The populations associated with the m largest overall sample means will be selected. The procedure is shown to satisfy a confidence requirement similar to that of Ofosu.

16.
In forest management surveys, the mean of a variable of interest (Y) in a population composed of N equal area spatial compact elements is increasingly estimated from a model linking Y to an auxiliary vector X known for all elements in the population. It is also desired to have synthetic estimates of the mean of Y in spatially compact domains (forest stands) with no or at most one sample-based observation of Y. We develop three alternative estimators of mean-squared errors (MSE) that reduce the risk of a serious underestimation of the uncertainty in a synthetic estimate of a domain mean in cases where the employed model does not account for domain effects or spatial autocorrelation in unobserved residual errors. Expansions of the estimators, including anticipated effects of spatial autocorrelation in residual errors, are also provided. Simulation results indicate that the conventional model-dependent (MD) population-level estimator of variance in a synthetic estimate of a domain mean underestimates uncertainty by a wide margin. Our alternative estimators mitigated, to a large extent, the problem of underestimating uncertainty in settings with weak to moderate domain effects and relatively small sample sizes. We demonstrate applications with examples from two actual forest inventories.

17.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen only little use in BE studies. The purpose of this work was to validate at least one such method for crossover design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement for at most negligible increase in type I error rate.

18.
For the case of a one‐sample experiment with known variance σ² = 1, it has been shown that at interim analysis the sample size (SS) may be increased by any arbitrary amount provided: (1) the conditional power (CP) at interim is ≥50%, and (2) there can be no decision to decrease the SS (stop the trial early). In this paper we verify this result for the case of a two‐sample experiment with proportional SS in the treatment groups and an arbitrary common variance. Numerous authors have presented the formula for the CP at interim for a two‐sample test with equal SS in the treatment groups and an arbitrary common variance, for both the one‐ and two‐sided hypothesis tests. In this paper we derive the corresponding formula for the case of unequal, but proportional, SS in the treatment groups for both one‐sided superiority and two‐sided hypothesis tests. Finally, we present an SAS macro for doing this calculation and provide a worked-out hypothetical example. In the discussion we note that this type of trial design trades the ability to stop early (for lack of efficacy) for the elimination of the Type I error penalty. The loss of early stopping requires that such a design employ a data monitoring committee, blinding of the sponsor to the interim calculations, and pre‐planning of how much and under what conditions to increase the SS, and that this all be formally written into an interim analysis plan before the start of the study. Copyright © 2009 John Wiley & Sons, Ltd.
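The conditional power calculation can be sketched in the generic B-value (information fraction) form; this is not the paper's exact formula for proportional group sizes, just the standard one-sided special case it builds on:

```python
import math
from statistics import NormalDist

def conditional_power(z_interim, info_frac, alpha=0.025, drift=None):
    """Conditional power at information fraction t (B-value form).

    `drift` is the expected final z-statistic; if None, the
    current-trend estimate z_interim / sqrt(t) is used.
    """
    nd = NormalDist()
    t = info_frac
    if drift is None:
        drift = z_interim / math.sqrt(t)
    b = z_interim * math.sqrt(t)          # B-value at time t
    z_crit = nd.inv_cdf(1 - alpha)
    return 1 - nd.cdf((z_crit - b - drift * (1 - t)) / math.sqrt(1 - t))
```

Under the current-trend drift, CP is exactly 50% when the interim z-statistic equals the final critical value scaled by the square root of the information fraction, which is the boundary condition (1) in the result above.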

19.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach on how to incorporate prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose to update the meta‐analytic‐predictive prior based on the results of the internal pilot study and to re‐estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics such as power and sample size distribution of the proposed procedure with the traditional sample size re‐estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics compared to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering to include prior information in sample size re‐estimation, the potential gains should be balanced against the risks.

20.
This article considers the uncertainty of a proportion based on a stratified random sample of a small population. Using the hypergeometric distribution, a Clopper–Pearson type upper confidence bound is presented. Another frequentist approach that uses the estimated variance of the proportion estimator is also considered as well as a Bayesian alternative. These methods are demonstrated with an illustrative example. Some aspects of planning, that is, the impact of specified strata sample sizes, on uncertainty are studied through a simulation study.
