Similar Literature
 A total of 20 similar articles found (search time: 281 ms)
1.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose to update the meta‐analytic‐predictive prior based on the results of the internal pilot study and to re‐estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re‐estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics compared to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re‐estimation, the potential gains should be balanced against the risks.
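To make the mechanics concrete, here is a minimal Python sketch under strong simplifying assumptions: the meta‐analytic‐predictive prior is collapsed to a single conjugate inverse‐gamma component on the variance, updated with hypothetical internal pilot results, and the posterior mean is plugged into a standard two‐arm sample size formula. All numbers (a0, b0, df, ss, delta) are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def sample_size_two_arm(var, delta, alpha=0.025, power=0.8):
    """Per-arm sample size for a two-arm comparison (normal approximation)."""
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    return int(np.ceil(2 * var * z ** 2 / delta ** 2))

# Hypothetical prior summarizing historical variance data: inverse-gamma(a0, b0)
a0, b0 = 5.0, 20.0            # prior mean of the variance = b0 / (a0 - 1) = 5.0

# Hypothetical internal pilot results: residual degrees of freedom, sum of squares
df, ss = 40, 230.0

# Conjugate update: the posterior is inverse-gamma(a0 + df/2, b0 + ss/2)
a1, b1 = a0 + df / 2, b0 + ss / 2
posterior_var = b1 / (a1 - 1)   # posterior-mean estimator of the variance

print(sample_size_two_arm(posterior_var, delta=2.0))
```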

2.
We consider blinded sample size re‐estimation based on the simple one‐sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two‐sample t‐test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re‐estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non‐inferiority margins for non‐inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
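A minimal sketch of the blinded step, assuming hypothetical pilot data: the interim analyst pools the two arms without treatment labels, applies the simple one‐sample variance estimator, and recomputes the per‐arm sample size from a normal approximation. This shows only the re‐estimation rule the abstract studies, not the exact distribution of the final t‐test statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def reestimate_n(pooled, delta, alpha=0.025, power=0.9):
    """Blinded re-estimation: the pooled data are treated as one sample,
    ignoring treatment labels (the simple one-sample variance estimator)."""
    s2 = np.var(pooled, ddof=1)
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    return int(np.ceil(2 * s2 * z ** 2 / delta ** 2))

# Internal pilot: n1 per arm, but the analyst only sees the pooled data
n1, delta = 30, 1.0
pilot = np.concatenate([rng.normal(0.0, 2.0, n1), rng.normal(delta, 2.0, n1)])
print(max(n1, reestimate_n(pilot, delta)))   # re-estimated per-arm sample size
```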

3.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re‐estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re‐estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre‐specified level. Furthermore, some refinements of the re‐estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
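For illustration, a rough sketch of how a blinded interim estimate of the coefficient of variation could drive the re‐estimated sample size, using a textbook normal approximation for a 2x2 crossover analysed by TOST on the log scale. The CV values and design constants are hypothetical, and the paper's adjustment of the significance level is not applied here.

```python
import numpy as np
from scipy import stats

def be_total_n(cv, gmr=0.95, alpha=0.05, power=0.8, limit=1.25):
    """Rough normal-approximation total sample size for a 2x2 crossover
    bioequivalence trial analysed by TOST on the log scale."""
    s2w = np.log(cv ** 2 + 1)             # within-subject log-scale variance
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    n = 2 * s2w * z ** 2 / (np.log(limit) - abs(np.log(gmr))) ** 2
    return int(np.ceil(n))

# Blinded re-estimation replaces the planning CV with the interim estimate
for cv in (0.20, 0.25, 0.30):             # hypothetical interim CV estimates
    print(cv, be_total_n(cv))
```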

4.
The problem considered in this paper is that of unbiased estimation of the variance of an exponential distribution using a ranked set sample (RSS). We propose some unbiased estimators each of which is better than the non-parametric minimum variance quadratic unbiased estimator based on a balanced ranked set sample as well as the uniformly minimum variance unbiased estimator based on a simple random sample (SRS) of the same size. Relative performances of the proposed estimators and a few other properties of the estimators including their robustness under imperfect ranking have also been studied.
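The paper's estimators are specific to its construction, but the sampling scheme itself is easy to show. Below is a sketch that generates a balanced ranked set sample under perfect ranking; the exponential population, set size, and cycle count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_rss(draw, k, cycles):
    """Generate a balanced ranked set sample of size k * cycles.

    In each cycle, k independent sets of k units are drawn; the i-th set
    contributes its i-th order statistic (perfect ranking assumed)."""
    out = []
    for _ in range(cycles):
        for i in range(k):
            out.append(np.sort(draw(k))[i])
    return np.array(out)

# Exponential(theta = 1) population, set size 3, 5 cycles -> RSS of size 15
rss = balanced_rss(lambda m: rng.exponential(1.0, m), k=3, cycles=5)
print(rss.mean(), rss.var(ddof=1))
```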

5.
The maximum likelihood estimate is considered for an intraclass correlation coefficient in a bivariate normal distribution when some observations on either of the variables are missing. The estimate is given as the solution of a polynomial equation of degree seven. An approximate confidence interval and a test procedure for the intraclass correlation are constructed based on an asymptotic variance-stabilizing transformation of the resulting estimator. The distributional results are also considered under violation of the normality assumption. A Monte Carlo study was performed to examine the finite sample properties of the maximum likelihood estimator and to evaluate the proposed procedures for hypothesis testing and interval estimation.

6.
In studies with recurrent event endpoints, misspecified assumptions of event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. The operating characteristics of these procedures are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
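A hedged sketch of the information-monitoring idea: under a negative binomial model, a commonly used approximation to the per-patient Fisher information for a log rate is λt/(1 + φλt), and recruitment continues until the accrued information reaches a target derived from the planned log rate ratio. The pooled interim estimates and design values below are hypothetical, and the exact criteria derived in the paper may differ.

```python
import numpy as np
from scipy import stats

def info_per_patient(lam, phi, t):
    """Approximate Fisher information for a log event rate from one patient
    followed for time t under a negative binomial model with dispersion phi."""
    return lam * t / (1 + phi * lam * t)

def target_information(log_rr, alpha=0.025, power=0.8):
    """Total information needed for a two-group log rate ratio comparison
    (equal allocation, so Var(log RR) is roughly 4 / total information)."""
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    return 4 * (z / log_rr) ** 2

# Blinded monitoring step: pooled rate and dispersion from blinded interim data
lam_hat, phi_hat, t = 0.9, 0.5, 1.0     # hypothetical pooled estimates
n_enrolled = 300
accrued = n_enrolled * info_per_patient(lam_hat, phi_hat, t)
target = target_information(np.log(0.7))
print(accrued, target, accrued >= target)   # keep recruiting if False
```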

7.
This paper compares minimum distance estimation with best linear unbiased estimation to determine which technique provides the most accurate estimates of the location and scale parameters of the three-parameter Pareto distribution. Two minimum distance estimators are developed for each of the three distance measures used (Kolmogorov, Cramer‐von Mises, and Anderson‐Darling), resulting in six new estimators. For a given sample size of 6 or 18 and shape parameter 1(1)4, the location and scale parameters are estimated. A Monte Carlo technique is used to generate the sample sets. The best linear unbiased estimator and the six minimum distance estimators provide parameter estimates based on each sample set. These estimates are compared using mean square error as the evaluation tool. Results show that the best linear unbiased estimator provided more accurate estimates of location and scale than did the minimum distance estimators tested.
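As an illustration of the minimum distance approach, the sketch below fits the location and scale of a Pareto distribution (shape assumed known) by minimizing the Kolmogorov distance between the empirical and model CDFs; the Cramer‐von Mises and Anderson‐Darling variants would swap in a different distance. The parameter values are arbitrary, and this is not the paper's exact construction.

```python
import numpy as np
from scipy import stats, optimize

def kolmogorov_distance(params, x, shape):
    """Sup-distance between the empirical CDF and a Pareto(shape, loc, scale) CDF."""
    loc, scale = params
    if scale <= 0:
        return np.inf
    x = np.sort(x)
    f = stats.pareto.cdf(x, shape, loc=loc, scale=scale)
    n = len(x)
    return max(np.max(np.arange(1, n + 1) / n - f),
               np.max(f - np.arange(0, n) / n))

rng = np.random.default_rng(0)
shape, loc, scale = 2.0, 10.0, 5.0                  # arbitrary true values
x = stats.pareto.rvs(shape, loc=loc, scale=scale, size=18, random_state=rng)

res = optimize.minimize(kolmogorov_distance, x0=[x.min() - 1.0, 1.0],
                        args=(x, shape), method="Nelder-Mead")
print(res.x)    # minimum-distance estimates of (location, scale)
```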

8.
The authors consider variance estimation for the generalized regression estimator in a two‐phase context when the first‐phase sample has been restratified using information gathered from the first‐phase sample. Simple computational expressions for variance estimation are provided for the double expansion estimator and the reweighted expansion estimator of Kott & Stukel (1997). These estimators are compared using data from the Canadian Retail Commodity Survey.

9.
General autoregressive moving average (ARMA) models extend the traditional ARMA models by removing the assumptions of causality and invertibility. In contrast to the Gaussian setting, these assumptions are not required for the identifiability of the model parameters under a non‐Gaussian setting. We study M‐estimation for general ARMA processes with infinite variance, where the distribution of innovations is in the domain of attraction of a non‐Gaussian stable law. Following the approach taken by Davis et al. (1992) and Davis (1996), we derive a functional limit theorem for random processes based on the objective function, and establish asymptotic properties of the M‐estimator. We also consider bootstrapping the M‐estimator and extend the results of Davis & Wu (1997) to the present setting so that statistical inferences are readily implemented. Simulation studies are conducted to evaluate the finite sample performance of the M‐estimation and bootstrap procedures. An empirical example of financial time series is also provided.
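Fitting a noncausal ARMA model is involved, but the flavour of M‐estimation under infinite variance can be shown with a causal AR(1) and the least‐absolute‐deviations objective rho(u) = |u|, a simple member of the M‐estimator class. This simplified example is not the general setting of the paper.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)

# AR(1) with heavy-tailed (Cauchy) innovations -- infinite variance
n, phi_true = 500, 0.6
eps = rng.standard_cauchy(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + eps[t]

def lad_objective(phi, x):
    """Least-absolute-deviations objective: an M-estimator with rho(u) = |u|."""
    return np.sum(np.abs(x[1:] - phi * x[:-1]))

res = optimize.minimize_scalar(lad_objective, bounds=(-0.99, 0.99),
                               args=(x,), method="bounded")
print(res.x)    # LAD estimate of the AR coefficient
```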

10.
Network meta‐analysis can be implemented by using arm‐based or contrast‐based models. Here we focus on arm‐based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial‐by‐treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi‐likelihood/pseudo‐likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood reduce bias and yield satisfactory coverage rates. Sum‐to‐zero restriction and baseline contrasts for random trial‐by‐treatment interaction effects, as well as a residual ML‐like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood are therefore recommended.

11.
The capture-recapture method is applied to estimate the population size of a target population based on ascertainment data in epidemiological applications. We generalize the three-list case of Chao & Tsay (1998) to situations where more than three lists are available. An estimation procedure is presented using the concept of sample coverage, which can be interpreted as a measure of overlap information among multiple list records. When there is enough overlap, an estimator of the total population size is proposed. The bootstrap method is used to construct a variance estimator and confidence interval. If the overlap rate is relatively low, then the population size cannot be precisely estimated and thus only a lower (upper) bound is proposed for positively (negatively) dependent lists. The proposed method is applied to two data sets, one with a high and one with a low overlap rate.
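The multi‐list sample‐coverage estimator is paper‐specific, so as a simpler stand‐in the sketch below applies Chapman's classical two‐list capture‐recapture estimator with a crude parametric bootstrap for interval estimation. The counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def chapman(n1, n2, m):
    """Chapman's nearly unbiased two-list capture-recapture estimator."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical two-list ascertainment data: list sizes and overlap count
n1, n2, m = 420, 380, 150
est = chapman(n1, n2, m)

# Crude parametric bootstrap: resample the overlap, holding list sizes fixed
boot = chapman(n1, n2, rng.binomial(n2, m / n2, size=2000))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(est), round(lo), round(hi))
```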

12.
Sample size reestimation in a crossover, bioequivalence study can be a useful adaptive design tool, particularly when the intrasubject variability of the drug formulation under investigation is not well understood. When sample size reestimation is done based on an interim estimate of the intrasubject variability and bioequivalence is tested using the pooled estimate of intrasubject variability, type I error inflation will occur. Type I error inflation is caused by the pooled estimate being a biased estimator of the intrasubject variability. The type I error inflation and bias of the pooled estimator of variability are well characterized in the setting of a two‐arm, parallel study. The purpose of this work is to extend this characterization to the setting of a crossover, bioequivalence study with sample size reestimation and to propose an estimator of the intrasubject variability that will prevent type I error inflation.
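A small Monte Carlo sketch of the inflation mechanism in the well‐characterized parallel‐group analogue the abstract references: the variance is re‐estimated unblinded at the interim, and the final t‐test pools first‐ and second‐stage data, which biases the pooled variance estimator downward. All design values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def type_one_error(n1, delta, sigma, alpha=0.05, reps=5000):
    """Empirical type I error of a two-arm trial with unblinded sample size
    re-estimation when the final test pools interim and second-stage data."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(0.8)
    rejections = 0
    for _ in range(reps):
        a, b = rng.normal(0, sigma, n1), rng.normal(0, sigma, n1)  # H0 is true
        s2 = (np.var(a, ddof=1) + np.var(b, ddof=1)) / 2
        n2 = max(n1, int(np.ceil(2 * s2 * z ** 2 / delta ** 2)))   # re-estimate
        a = np.concatenate([a, rng.normal(0, sigma, n2 - n1)])
        b = np.concatenate([b, rng.normal(0, sigma, n2 - n1)])
        rejections += stats.ttest_ind(a, b).pvalue < alpha
    return rejections / reps

print(type_one_error(n1=12, delta=1.0, sigma=1.5))   # typically above 0.05
```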

13.
Variance estimation of changes requires estimates of variances and covariances that would be relatively straightforward to make if the sample remained the same from one wave to the next, but this is rarely the case in practice as successive waves are usually different overlapping samples. The author proposes a design‐based estimator for covariance matrices that is adapted to this situation. Under certain conditions, he shows that his approach yields non‐negative definite estimates for covariance matrices and therefore positive variance estimates for a large class of measures of change.

14.
Finding an interval estimation procedure for the variance of a population that achieves a specified confidence level can be problematic. If the distribution of the population is known, then a distribution-dependent interval for the variance can be obtained by considering a power transformation of the sample variance. Simulation results suggest that this method produces intervals for the variance that maintain the nominal probability of coverage for a wide variety of distributions. If the underlying distribution is unknown, then the power itself must be estimated prior to forming the endpoints of the interval. The result is a distribution-free confidence interval estimator of the population variance. Simulation studies indicate that the power transformation method compares favorably to the logarithmic transformation method and the nonparametric bias-corrected and accelerated bootstrap method for moderately sized samples. However, two applications, one in forestry and the other in health sciences, demonstrate that no single method is best for all scenarios.
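For context, here is a sketch of the logarithmic transformation comparator named in the abstract: the delta method gives Var(log s^2) ≈ (κ̂ − (n−3)/(n−1))/n, where κ̂ estimates the standardized fourth moment, and the normal-theory interval is back‐transformed. The paper's power transformation method estimates the power itself, which this sketch does not attempt.

```python
import numpy as np
from scipy import stats

def log_ci_variance(x, conf=0.95):
    """Distribution-free CI for the population variance via the log transform
    of the sample variance and the delta method."""
    n = len(x)
    s2 = np.var(x, ddof=1)
    m4 = np.mean((x - x.mean()) ** 4)
    kurt = m4 / s2 ** 2                       # estimated standardized 4th moment
    se_log = np.sqrt(max(kurt - (n - 3) / (n - 1), 1e-12) / n)
    z = stats.norm.ppf(0.5 + conf / 2)
    return s2 * np.exp(-z * se_log), s2 * np.exp(z * se_log)

rng = np.random.default_rng(0)
print(log_ci_variance(rng.exponential(2.0, 200)))   # true variance is 4.0
```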

15.
The re‐randomization test has been considered a robust alternative to traditional population model‐based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, a popular covariate‐adaptive randomization method for ensuring balance among prognostic factors. Among various re‐randomization tests, the fixed‐entry‐order re‐randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed‐entry‐order re‐randomization test is biased and thus compromised in power. We find that the bias is due to non‐uniform re‐allocation probabilities incurred by the re‐randomization in this case. We therefore propose a weighted fixed‐entry‐order re‐randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re‐randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
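Re‐implementing minimization is beyond a short example, so the sketch below shows only the skeleton of a fixed‐entry‐order re‐randomization test, using re‐drawn 1:1 allocations as a stand‐in for re‐running the minimization algorithm. The paper's weighted version would, roughly speaking, reweight each re‐allocation to correct the non‐uniform re‐allocation probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def rerandomization_pvalue(y, assign, n_draws=5000):
    """Fixed-entry-order re-randomization test: outcomes keep their entry
    order while the allocation sequence is re-drawn."""
    obs = y[assign == 1].mean() - y[assign == 0].mean()
    hits = 0
    for _ in range(n_draws):
        a = rng.permutation(assign)   # stand-in for re-running minimization
        hits += abs(y[a == 1].mean() - y[a == 0].mean()) >= abs(obs)
    return hits / n_draws

# Outcomes with a mild temporal trend across entry order (hypothetical)
y = rng.normal(0.0, 1.0, 60) + 0.02 * np.arange(60)
assign = np.array([1, 0] * 30)        # 1:1 allocation in entry order
print(rerandomization_pvalue(y, assign))
```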

16.
Donor imputation is frequently used in surveys. However, very few variance estimation methods that take into account donor imputation have been developed in the literature. This is particularly true for surveys with high sampling fractions using nearest donor imputation, often called nearest‐neighbour imputation. In this paper, the authors develop a variance estimator for donor imputation based on the assumption that the imputed estimator of a domain total is approximately unbiased under an imputation model; that is, a model for the variable requiring imputation. Their variance estimator is valid, irrespective of the magnitude of the sampling fractions and the complexity of the donor imputation method, provided that the imputation model mean and variance are accurately estimated. They evaluate its performance in a simulation study and show that nonparametric estimation of the model mean and variance via smoothing splines brings robustness with respect to imputation model misspecifications. They also apply their variance estimator to real survey data when nearest‐neighbour imputation has been used to fill in the missing values. The Canadian Journal of Statistics 37: 400–416; 2009 © 2009 Statistical Society of Canada

17.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed‐form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre‐stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
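A sketch of the Poisson‐gamma recruitment model with hypothetical parameters: centre‐level rates are gamma distributed and patient arrivals are Poisson, so each centre's count over a fixed horizon is marginally negative binomial, which is the kind of closed form the paper exploits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Poisson-gamma model: centre rates sampled from a gamma population
n_centres, alpha, beta, horizon = 50, 2.0, 4.0, 12.0    # hypothetical values
rates = rng.gamma(alpha, 1 / beta, n_centres)           # patients per month
counts = rng.poisson(rates * horizon)                   # recruits per centre

# Marginally, each centre's count is negative binomial with size alpha and
# success probability beta / (beta + horizon), giving closed-form predictions
p = beta / (beta + horizon)
print(counts.sum(), n_centres * stats.nbinom.mean(alpha, p))  # observed vs expected
```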

18.
This report presents numerical results of an approach for parameter estimation and hypothesis testing that does not rely on specific assumptions about the underlying distribution of errors in the measured data. This approach combines robust estimation procedures, the bootstrap method for estimation of parameter uncertainties, permutation techniques for hypothesis testing, and adaptive approaches to estimation in order to obtain the minimum variance estimator or test statistic (within a predefined class) for the data under consideration. The technique produces efficient estimators of central tendency and powerful test statistics, even for small sample sizes. (Portions of this work have been presented in preliminary form (Turkheimer et al., 1996)).
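The combination of ingredients is easy to illustrate. In the sketch below, a 20% trimmed mean serves as the robust estimator of central tendency, the bootstrap supplies its standard error, and a permutation test compares two small heavy‐tailed samples; the report's specific estimators and adaptive selection step are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

x = rng.standard_t(3, size=25)            # small heavy-tailed sample
y = rng.standard_t(3, size=25) + 0.8      # shifted second group

# Robust estimator of central tendency: 20% trimmed mean
est = stats.trim_mean(x, 0.2)

# Bootstrap standard error of the trimmed mean
boot = [stats.trim_mean(rng.choice(x, x.size), 0.2) for _ in range(2000)]
se = np.std(boot, ddof=1)

# Permutation test for a location difference based on the trimmed means
def diff(a, b):
    return stats.trim_mean(a, 0.2) - stats.trim_mean(b, 0.2)

res = stats.permutation_test((x, y), diff, n_resamples=2000)
print(est, se, res.pvalue)
```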

19.
We propose a novel "bias-corrected realized variance" (BCRV) estimator based upon the appropriate re-weighting of two realized variances calculated at different sampling frequencies. Our bias-correction methodology is found to be extremely accurate, with a substantially reduced finite sample variance. In our Monte Carlo experiments and a finite sample MSE comparison of alternative estimators, the performance of our straightforward BCRV estimator is shown to be comparable to other widely-used integrated variance estimators. Given its simplicity, our BCRV estimator is likely to appeal to researchers and practitioners alike for the estimation of integrated variance.
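The paper's exact weighting is its own contribution, but one natural version of the idea can be sketched: under i.i.d. microstructure noise, E[RV_m] ≈ IV + 2·m·Var(noise), so realized variances at two sampling frequencies give two equations that can be solved for the integrated variance. The simulation scales below are hypothetical, and this combination is not necessarily the authors' BCRV weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# One day of an efficient log-price plus i.i.d. microstructure noise
n, sigma, noise_sd = 23400, 0.01, 0.0002      # 1-second grid, hypothetical scales
price = np.cumsum(rng.normal(0, sigma / np.sqrt(n), n))
obs = price + rng.normal(0, noise_sd, n)

def realized_variance(p, step):
    """Realized variance from returns sampled every `step` observations."""
    r = np.diff(p[::step])
    return np.sum(r ** 2), r.size

rv_hi, m_hi = realized_variance(obs, 1)       # high sampling frequency
rv_lo, m_lo = realized_variance(obs, 30)      # lower sampling frequency

# E[RV_m] ~ IV + 2*m*noise_var, so solving the two equations removes the bias
iv_hat = (m_hi * rv_lo - m_lo * rv_hi) / (m_hi - m_lo)
print(iv_hat, sigma ** 2)                     # estimate vs true integrated variance
```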

20.
Variance estimation is a fundamental problem in statistical modelling. In ultrahigh dimensional linear regression, where the dimensionality is much larger than the sample size, traditional variance estimation techniques are not applicable. Recent advances in variable selection in ultrahigh dimensional linear regression make this problem accessible. One of the major problems in ultrahigh dimensional regression is the high spurious correlation between the unobserved realized noise and some of the predictors. As a result, the realized noises are actually predicted when extra irrelevant variables are selected, leading to a serious underestimate of the noise level. We propose a two-stage refitted procedure via a data splitting technique, called refitted cross-validation, to attenuate the influence of irrelevant variables with high spurious correlations. Our asymptotic results show that the resulting procedure performs as well as the oracle estimator, which knows in advance the mean regression function. The simulation studies lend further support to our theoretical claims. The naive two-stage estimator and the plug-in one-stage estimators using the lasso and smoothly clipped absolute deviation are also studied and compared. Their performance can be improved by the proposed refitted cross-validation method.
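A compact sketch of refitted cross-validation under simplifying assumptions, using scikit-learn's LassoCV for the selection stage: the data are split in half, variables selected on one half are refit by ordinary least squares on the other, the noise variance is estimated from the refitted residuals, and the two directions of the split are averaged. The dimensions and signal below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p, sigma = 200, 1000, 2.0                  # n much smaller than p
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 3.0                                # sparse true signal
y = X @ beta + rng.normal(0, sigma, n)

def rcv_sigma2(X, y):
    """Refitted cross-validation: select variables on one half of the data,
    refit by least squares and estimate the noise variance on the other half,
    then average over the two directions of the split."""
    idx = rng.permutation(len(y))
    half1, half2 = idx[: len(y) // 2], idx[len(y) // 2:]
    estimates = []
    for sel, fit in ((half1, half2), (half2, half1)):
        keep = np.flatnonzero(LassoCV(cv=5).fit(X[sel], y[sel]).coef_)
        if keep.size == 0:
            estimates.append(np.var(y[fit], ddof=1))
            continue
        model = LinearRegression().fit(X[fit][:, keep], y[fit])
        resid = y[fit] - model.predict(X[fit][:, keep])
        estimates.append(np.sum(resid ** 2) / max(len(fit) - keep.size - 1, 1))
    return np.mean(estimates)

print(rcv_sigma2(X, y), sigma ** 2)           # estimate vs true noise variance
```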
