Similar Documents
20 similar documents were retrieved.
1.
Summary. Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials for comparing treatments, while keeping the experiment fairly balanced. Extensions of the biased coin design have been proposed by several researchers who have focused mainly on the large sample properties of their designs. We modify Efron's procedure by introducing an adjustable biased coin design, which is more flexible than Efron's. We compare it with other existing coin designs; in terms of balance and lack of predictability, its performance for small samples appears in many cases to be an improvement with respect to the other sequential randomized allocation procedures.
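A minimal sketch of Efron's original rule, for reference: when the two arms are balanced a fair coin is tossed; otherwise the under-represented arm is favoured with a fixed probability p (p = 2/3 here is an illustrative choice). The adjustable design described above replaces the fixed p with a bias that depends on the current imbalance; that variant is not reproduced here.

```python
import numpy as np

def efron_bcd(n, p=2/3, rng=None):
    """Efron's biased coin: favour the under-represented arm with probability p."""
    rng = np.random.default_rng(rng)
    assignments = np.empty(n, dtype=int)   # 0 = treatment A, 1 = treatment B
    diff = 0                               # (#A - #B) so far
    for i in range(n):
        if diff == 0:
            prob_a = 0.5                   # balanced: fair coin
        elif diff < 0:
            prob_a = p                     # A is behind: favour A
        else:
            prob_a = 1 - p                 # A is ahead: favour B
        a = rng.random() < prob_a
        assignments[i] = 0 if a else 1
        diff += 1 if a else -1
    return assignments

arms = efron_bcd(30, p=2/3, rng=1)
print("final imbalance |nA - nB|:", abs((arms == 0).sum() - (arms == 1).sum()))
```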

2.
Outlining some recently obtained results of Hu and Rosenberger [2003. Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. J. Amer. Statist. Assoc. 98, 671–678] and Chen [2006. The power of Efron's biased coin design. J. Statist. Plann. Inference 136, 1824–1835] on the relationship between sequential randomized designs and the power of the usual statistical procedures for testing the equivalence of two competing treatments, the aim of this paper is to provide theoretical proofs of the numerical results of Chen [2006]. Furthermore, we prove that the Adjustable Biased Coin Design [Baldi Antognini, A., Giovagnoli, A., 2004. A new “biased coin design” for the sequential allocation of two treatments. J. Roy. Statist. Soc. Ser. C 53, 651–664] is uniformly more powerful than the other “coin” designs proposed in the literature for any sample size.

3.
The power of a statistical test depends on the sample size. Moreover, in a randomized trial where two treatments are compared, the power also depends on the number of assignments of each treatment. We can treat the power as the conditional probability of correctly detecting a treatment effect given a particular treatment allocation status. This paper uses a simple z-test and a t-test to demonstrate and analyze the power function under the biased coin design proposed by Efron in 1971. We numerically show that Efron's biased coin design is uniformly more powerful than perfect simple randomization.
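The comparison above can be reproduced in miniature by averaging the allocation-conditional power of a two-sample z-test over simulated allocations, once under Efron's biased coin rule (bias p = 2/3) and once under complete randomization. The effect size, variance, sample size and replication count below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def efron_allocation(n, p=2/3, rng=None):
    # Efron's rule: favour the lagging arm with probability p; returns the number allocated to arm A
    rng = np.random.default_rng(rng)
    diff, n_a = 0, 0
    for _ in range(n):
        prob_a = 0.5 if diff == 0 else (p if diff < 0 else 1 - p)
        a = rng.random() < prob_a
        n_a += a
        diff += 1 if a else -1
    return n_a

def conditional_power(n_a, n_b, delta=0.5, sigma=1.0, alpha=0.05):
    # power of the two-sided z-test given the realised split (n_a, n_b)
    if n_a == 0 or n_b == 0:
        return 0.0
    se = sigma * np.sqrt(1 / n_a + 1 / n_b)
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - delta / se) + norm.cdf(-z - delta / se)

rng = np.random.default_rng(2024)
n, reps = 40, 20000
pow_bcd = np.mean([conditional_power(na := efron_allocation(n, rng=rng), n - na) for _ in range(reps)])
pow_cr = np.mean([conditional_power(na := rng.binomial(n, 0.5), n - na) for _ in range(reps)])
print(f"average power: Efron BCD {pow_bcd:.3f} vs complete randomization {pow_cr:.3f}")
```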

4.
With a growing interest in using non-representative samples to train prediction models for numerous outcomes, it is necessary to account for the sampling design that gives rise to the data in order to assess the generalized predictive utility of a proposed prediction rule. After learning a prediction rule based on a non-uniform sample, it is of interest to estimate the rule's error rate when applied to unobserved members of the population. Efron (1986) proposed a general class of covariance penalty inflated prediction error estimators that assume the available training data are representative of the target population for which the prediction rule is to be applied. We extend Efron's estimator to the complex sample context by incorporating Horvitz–Thompson sampling weights and show that it is consistent for the true generalization error rate when applied to the underlying superpopulation. The resulting Horvitz–Thompson–Efron estimator is equivalent to dAIC, a recent extension of Akaike's information criteria to survey sampling data, but is more widely applicable. The proposed methodology is assessed with simulations and is applied to models predicting renal function obtained from the large-scale National Health and Nutrition Examination Survey (NHANES). The Canadian Journal of Statistics 48: 204–221; 2020 © 2019 Statistical Society of Canada

5.
In the independent setting, both Efron's bootstrap and the “empirical Edgeworth expansion” (E.E-expansion) give second-order accurate approximations to distributions of standardized and studentized statistics in the smooth function model. As a result, Efron's bootstrap was often regarded as roughly equivalent to the one-term E.E-expansion. However, a more detailed analysis shows that Efron's bootstrap outperforms the E.E-expansion in terms of the loss functions of Bhattacharya and Qumsiyeh (1989) and in terms of probabilities of large deviations, as shown by Hall (1990) and Jing et al. (1994). In this paper, we shall study the performances of the block bootstrap and the E.E-expansion for weakly dependent data. It turns out that similar properties hold: both perform equally well at the center of the distribution, but the block bootstrap provides accurate approximations even in the tails of the distributions. The study is focused on the simple case of the standardized and studentized sample mean, but the conclusions can be easily extended to smooth functions of multivariate means.
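A minimal illustration of the block bootstrap discussed above: overlapping blocks of a weakly dependent series are resampled and a basic (Hall-type) percentile interval for the mean is formed. The AR(1) data-generating process, block length and number of replicates are illustrative assumptions; handling studentized statistics properly would additionally require a block-based variance estimate within each resample.

```python
import numpy as np

def ar1(n, phi=0.5, rng=None):
    # weakly dependent toy series: AR(1) with standard normal innovations
    rng = np.random.default_rng(rng)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def block_bootstrap_means(x, block_len=8, n_boot=2000, rng=None):
    """Moving-block bootstrap replicates of the sample mean."""
    rng = np.random.default_rng(rng)
    n = len(x)
    blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)  # all overlapping blocks
    k = int(np.ceil(n / block_len))                                  # blocks needed per resample
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, blocks.shape[0], size=k)
        means[b] = blocks[idx].ravel()[:n].mean()
    return means

rng = np.random.default_rng(0)
x = ar1(200, rng=rng)
boot = block_bootstrap_means(x, rng=rng)
lo, hi = np.quantile(boot - x.mean(), [0.025, 0.975])
# basic (Hall-type) percentile interval for the mean of the dependent series
print(f"95% block-bootstrap CI for the mean: [{x.mean() - hi:.3f}, {x.mean() - lo:.3f}]")
```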

6.
In this paper, we propose a design that uses a short-term endpoint for accelerated approval at interim analysis and a long-term endpoint for full approval at final analysis, with sample size adaptation based on the long-term endpoint. Two sample size adaptation rules are compared: an adaptation rule to maintain the conditional power at a prespecified level and a step-function-type adaptation rule to better address the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive between the endpoints; and alpha exhaustive with an improved critical value based on correlation. The family-wise error rate is proved to be strongly controlled for the two endpoints, sample size adaptation, and two analysis time points with the proposed designs. We show that using alpha exhaustive designs greatly improves the power when both endpoints are effective, and the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings.
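To make the first adaptation rule above concrete, the sketch below computes the conditional power of a one-sided final z-test under the "current trend" assumption and re-estimates the total per-arm sample size so that conditional power reaches a target. The interim z-value, sample sizes, target and cap are illustrative assumptions, and the sketch omits the adjusted testing procedures that the paper uses to keep the family-wise error rate controlled after adaptation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def conditional_power(n_total, n1, z1, sigma=1.0, alpha=0.025):
    """Conditional power of the final one-sided z-test, assuming the current trend continues."""
    i1, i_tot = n1 / (2 * sigma**2), n_total / (2 * sigma**2)   # per-arm Fisher information
    b1 = z1 * np.sqrt(i1)                                       # interim statistic on the score scale
    drift = b1 / i1                                             # current-trend effect estimate
    num = norm.ppf(1 - alpha) * np.sqrt(i_tot) - b1 - drift * (i_tot - i1)
    return 1 - norm.cdf(num / np.sqrt(i_tot - i1))

n1, z1, n_planned, target = 100, 1.5, 200, 0.9
print(f"conditional power at the planned n: {conditional_power(n_planned, n1, z1):.3f}")

# re-estimate the total per-arm sample size so conditional power hits the target (capped at 4x the plan)
f = lambda n: conditional_power(n, n1, z1) - target
n_new = brentq(f, n1 + 1, 4 * n_planned) if f(4 * n_planned) > 0 else 4 * n_planned
print(f"re-estimated total n per arm: {int(np.ceil(n_new))}")
```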

7.
This article compares the properties of two balanced randomization schemes with several treatments under non-uniform allocation probabilities. According to the first procedure, the so-called truncated multinomial randomization design, the process employs a given allocation distribution until a treatment receives its quota of subjects, after which this distribution switches to the conditional distribution for the remaining treatments, and so on. The second scheme, the random allocation rule, selects at random any legitimate assignment of the given number of subjects per treatment. The behavior of these two schemes is shown to be quite different: the truncated multinomial randomization design's assignment probabilities to a treatment turn out to vary over the recruitment period, and its accidental bias can be large, whereas the corresponding bias of the random allocation rule is bounded. The limiting distributions of the instants at which a treatment receives its given number of subjects are shown to be those of weighted spacings for normal order statistics with different variances. Formulas for the selection bias of both procedures are also derived.
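Both allocation schemes are simple to state operationally; the sketch below implements them as described above, for three treatments with illustrative quotas and allocation probabilities (these values are assumptions, not taken from the paper).

```python
import numpy as np

def truncated_multinomial(quotas, probs, rng=None):
    """Assign subjects one at a time from `probs`; once a treatment fills its quota,
    renormalise the allocation distribution over the remaining treatments."""
    rng = np.random.default_rng(rng)
    quotas = np.asarray(quotas)
    counts = np.zeros_like(quotas)
    p = np.asarray(probs, dtype=float)
    seq = []
    for _ in range(quotas.sum()):
        open_arms = counts < quotas
        w = np.where(open_arms, p, 0.0)
        arm = rng.choice(len(quotas), p=w / w.sum())
        counts[arm] += 1
        seq.append(arm)
    return np.array(seq)

def random_allocation_rule(quotas, rng=None):
    """Pick uniformly at random among all assignment sequences with the given per-treatment counts."""
    rng = np.random.default_rng(rng)
    seq = np.repeat(np.arange(len(quotas)), quotas)
    rng.shuffle(seq)
    return seq

quotas, probs = [10, 20, 30], [1/6, 2/6, 3/6]
print(truncated_multinomial(quotas, probs, rng=1)[:15])
print(random_allocation_rule(quotas, rng=1)[:15])
```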

8.
The cube method proposed by Deville and Tillé (2004) enables the selection of balanced samples: that is, samples such that the Horvitz-Thompson estimators of auxiliary variables match the known totals of those variables. As an exact balanced sampling design often does not exist, the cube method generally proceeds in two steps: a “flight phase” in which exact balance is maintained, and a “landing phase” in which the final sample is selected while respecting the balance conditions as closely as possible. Deville and Tillé (2005) derive a variance approximation for balanced sampling that takes account of the flight phase only, whereas the landing phase can prove to add non-negligible variance. This paper uses a martingale difference representation of the cube method to construct an efficient simulation-based method for calculating approximate second-order inclusion probabilities. The approximation enables nearly unbiased variance estimation, where the bias is primarily due to the limited number of simulations. In a Monte Carlo study, the proposed method has significantly less bias than the standard variance estimator, leading to improved confidence interval coverage.
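The simulation-based idea above, approximating second-order inclusion probabilities by repeated draws from the design and plugging them into a Sen–Yates–Grundy-type variance formula, can be sketched generically. The code below treats the sampling design as a black box; simple random sampling stands in for the cube method's flight and landing phases, and the population size, sample size and study variable are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def estimate_inclusion_probs(draw_sample, N, n_sim=5000, rng=None):
    """Monte Carlo estimates of first- and second-order inclusion probabilities
    for an arbitrary (black-box) sampling design draw_sample(rng) -> index array."""
    rng = np.random.default_rng(rng)
    pi, pi2 = np.zeros(N), np.zeros((N, N))
    for _ in range(n_sim):
        s = np.zeros(N)
        s[draw_sample(rng)] = 1
        pi += s
        pi2 += np.outer(s, s)
    return pi / n_sim, pi2 / n_sim

# stand-in design (simple random sampling); in practice draw_sample would run the cube method
N, n = 20, 8
draw = lambda rng: rng.choice(N, size=n, replace=False)
pi, pi2 = estimate_inclusion_probs(draw, N, rng=0)

# Sen-Yates-Grundy-type design variance of the Horvitz-Thompson total for a toy variable y
y = np.arange(1, N + 1, dtype=float)
var = sum((pi[k] * pi[l] - pi2[k, l]) * (y[k] / pi[k] - y[l] / pi[l]) ** 2
          for k, l in combinations(range(N), 2))
print("approximate design variance of the HT total:", round(var, 2))
```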

9.
In mixed models the mean square error (MSE) of empirical best linear unbiased estimators generally cannot be written in closed form. Unlike traditional methods of inference, parametric bootstrapping does not require approximation of this MSE or the test statistic distribution. Data were simulated to compare coverage rates for intervals based on the naïve MSE approximation and the method of Kenward and Roger, and parametric bootstrap intervals (Efron's percentile, Hall's percentile, bootstrap-t). The Kenward–Roger method performed best and the bootstrap-t almost as well. Intervals were also compared for a small set of real data. Implications for minimum sample size are discussed.
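For reference, the three bootstrap intervals compared above (Efron's percentile, Hall's percentile and the bootstrap-t) are sketched below for the simplest possible target, a normal mean, rather than a mixed-model quantity; the model, sample and number of replicates are illustrative assumptions.

```python
import numpy as np

def parametric_bootstrap_intervals(x, n_boot=4000, alpha=0.05, rng=None):
    """Efron percentile, Hall (basic) percentile and bootstrap-t intervals for a normal mean."""
    rng = np.random.default_rng(rng)
    n, mu_hat, sd_hat = len(x), x.mean(), x.std(ddof=1)
    se_hat = sd_hat / np.sqrt(n)
    boot_mu, boot_t = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.normal(mu_hat, sd_hat, n)        # parametric resample from the fitted model
        boot_mu[b] = xb.mean()
        boot_t[b] = (xb.mean() - mu_hat) / (xb.std(ddof=1) / np.sqrt(n))
    lo, hi = np.quantile(boot_mu, [alpha / 2, 1 - alpha / 2])
    t_lo, t_hi = np.quantile(boot_t, [alpha / 2, 1 - alpha / 2])
    return {
        "Efron percentile": (lo, hi),
        "Hall percentile": (2 * mu_hat - hi, 2 * mu_hat - lo),
        "bootstrap-t": (mu_hat - t_hi * se_hat, mu_hat - t_lo * se_hat),
    }

rng = np.random.default_rng(7)
x = rng.normal(1.0, 2.0, size=25)
for name, (l, u) in parametric_bootstrap_intervals(x, rng=rng).items():
    print(f"{name:>16}: [{l:.3f}, {u:.3f}]")
```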

10.
We study the problem of fitting a heteroscedastic median regression model from left-truncated and right-censored data. It is demonstrated that the adapted Efron's self-consistency equation of McKeague et al. (2001) can be extended to analyze left-truncated and right-censored data. We evaluate the finite sample performance of the proposed estimators through simulation studies.

11.
The performance of the sampling strategy used in the Botswana AIDS Impact Survey II (BAISII) has been studied in detail under a randomized response technique. We have shown that alternative strategies based on the Rao–Hartley–Cochran (RHC) sampling scheme for the selection of first stage units perform much better than other strategies. In particular, the combination of RHC for the selection of first stage units (fsu's) and systematic sampling for the selection of second stage units (ssu's) performs best when the sample size is small, whereas RHC and SRSWOR perform best when the sample size is large. In view of the present findings it is recommended that the BAISII survey be studied in more detail, incorporating more indicators and increased sample sizes, because the BAISII survey design is extensively used for large-scale surveys in Southern African countries.

12.
Asymptotic bias formulae are obtained for Heckman's two-step estimator under misspecification of the single-equation Tobit model and the two-equation sample selection model. Asymptotic biases are also obtained for the ordinary least squares estimator based on uncensored observations only. Omitted variables, errors in variables, and heteroskedasticity are considered as sources of misspecification. The biases are illustrated by numerical examples, in which the Tobit maximum likelihood estimator is also included. Severe consequences for the two-step estimator are indicated.

13.
Heckman's two-step procedure (Heckit) for estimating the parameters in linear models from censored data is frequently used by econometricians, despite the fact that earlier studies cast doubt on the procedure. In this paper it is shown that estimates of the hazard h of approaching the censoring limit, which is used as an explanatory variable in the second step of the Heckit, can induce multicollinearity. The influence of the censoring proportion and sample size upon bias and variance in three types of random linear models is studied by simulation. From these results a simple relation is established that describes how absolute bias depends on the censoring proportion and the sample size. It is also shown that the Heckit may work with non-normal (Laplace) distributions, but it collapses if h deviates too much from that of the normal distribution. Data from a study of work resumption after sick-listing are used to demonstrate that the Heckit can be very risky.
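For readers unfamiliar with the procedure discussed in the two previous abstracts, here is a minimal sketch of the classical Heckit on simulated data: a probit selection equation is fitted first, its inverse Mills ratio (the hazard term) is computed, and the outcome equation is then estimated by OLS on the selected observations with the Mills ratio as an extra regressor. The data-generating process, coefficients and use of statsmodels are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 2000
# simulated data with correlated errors, so naive OLS on the selected sample is biased
z = rng.normal(size=(n, 2))                                 # selection covariates
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
select = (0.5 + z @ np.array([1.0, -0.8]) + u[:, 0]) > 0    # selection equation
y = 1.0 + 2.0 * z[:, 0] + u[:, 1]                           # outcome, observed only if selected

# step 1: probit for selection, then the inverse Mills ratio (the hazard term)
probit = sm.Probit(select.astype(float), sm.add_constant(z)).fit(disp=0)
xb = sm.add_constant(z) @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# step 2: OLS on the selected observations with the Mills ratio as an extra regressor
X2 = sm.add_constant(np.column_stack([z[select, 0], mills[select]]))
heckit = sm.OLS(y[select], X2).fit()
print(heckit.params)   # intercept, slope, coefficient on the Mills ratio
```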

14.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome using an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. Sensitivity analysis is implemented for assessing the effects of varying the parameters of the prior distribution and the maximum total sample size on critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new stopping rule for futility results in greater power and can stop earlier with a smaller actual sample size, compared with the traditional stopping rule for futility when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
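An alpha-spending function simply partitions the overall type I error across the planned looks. The sketch below evaluates two standard Lan–DeMets spending functions (O'Brien–Fleming-type and Pocock-type) at illustrative information fractions; it shows only how alpha is spent, not the Bayesian stopping boundaries or the futility rule proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

ALPHA = 0.025   # one-sided overall type I error

def obf_spending(t, alpha=ALPHA):
    # O'Brien-Fleming-type Lan-DeMets spending function
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def pocock_spending(t, alpha=ALPHA):
    # Pocock-type spending function
    return alpha * np.log(1 + (np.e - 1) * t)

info = np.array([0.25, 0.5, 0.75, 1.0])   # information fractions at the interim and final looks
for name, f in [("O'Brien-Fleming", obf_spending), ("Pocock", pocock_spending)]:
    cum = f(info)
    inc = np.diff(np.concatenate([[0.0], cum]))
    print(f"{name:>15}: cumulative {np.round(cum, 4)}  incremental {np.round(inc, 4)}")
```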

15.
In real-data analysis, deciding the best subset of variables in regression models is an important problem. Akaike's information criterion (AIC) is often used to select variables in many fields. When the sample size is not so large, the AIC has a non-negligible bias that will detrimentally affect variable selection. The present paper considers a bias correction of the AIC for selecting variables in the generalized linear model (GLM). The GLM can express a number of statistical models by changing the distribution and the link function, such as the normal linear regression model, the logistic regression model, and the probit model, which are currently commonly used in a number of applied fields. In the present study, we obtain a simple expression for a bias-corrected AIC (corrected AIC, or CAIC) in GLMs. Furthermore, we provide R code based on our formula. A numerical study reveals that the CAIC has better performance than the AIC for variable selection.
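To illustrate the small-sample issue described above, the sketch below fits a logistic regression to a small simulated data set and reports the ordinary AIC alongside the classical normal-linear small-sample correction (AICc). The paper derives a correction specific to GLMs; the AICc here is only a familiar stand-in for the idea, and the simulated data and coefficients are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, k = 40, 3                                    # small sample, a few candidate covariates
X = sm.add_constant(rng.normal(size=(n, k)))
eta = X @ np.array([-0.5, 1.0, 0.0, 0.0])       # only the first covariate matters
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
p = X.shape[1]                                  # number of estimated coefficients
aic = -2 * fit.llf + 2 * p
aicc = aic + 2 * p * (p + 1) / (n - p - 1)      # classical small-sample correction (normal-linear AICc)
print(f"AIC = {aic:.2f},  small-sample corrected AIC = {aicc:.2f}")
```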

16.
This paper considers the problem of testing for nonzero values of the equicorrelation coefficient of a standard symmetric multivariate normal distribution. Recently, SenGupta (1987) proposed a locally best test. We construct a beta-optimal test and present selected one and five percent critical values. An empirical power comparison of SenGupta's test with two versions of the beta-optimal test and the power envelope shows the relative strengths of the three tests. It also allows us to assess and confirm Efron's (1975) rule of when to question the use of a locally best test, at least for this testing problem. On the basis of these results, we argue that the two beta-optimal tests can be considered as approximately uniformly most powerful tests, at least at the five percent significance level.

17.
A study of the distribution of a statistic involves two major steps: (a) working out its asymptotic (large-n) distribution, and (b) making the connection between the asymptotic results and the distribution of the statistic for the sample sizes used in practice. This crucial second step is not included in many studies. In this article, the second step is applied to Durbin's (1951) well-known rank test of treatment effects in balanced incomplete block designs (BIBs). We found that asymptotic χ² distributions do not provide adequate approximations in most BIBs. Consequently, we feel that several of Durbin's recommendations should be altered.
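The second step above can be carried out by simulation for a specific small design. The sketch below simulates the null distribution of the standard (no-ties) Durbin statistic in a BIB with four treatments arranged in four blocks of size three and checks the actual size of a nominal 5% test based on the χ² cut-off; the design and the number of replications are illustrative choices.

```python
import numpy as np
from scipy.stats import chi2, rankdata

# BIB design: t = 4 treatments, b = 4 blocks of size k = 3, each treatment appears r = 3 times
blocks = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
t, k, r = 4, 3, 3

def durbin_stat(rng):
    """Durbin statistic computed from within-block ranks of exchangeable (null) data."""
    R = np.zeros(t)
    for blk in blocks:
        ranks = rankdata(rng.normal(size=k))      # ranks 1..k within the block
        for trt, rk in zip(blk, ranks):
            R[trt] += rk
    return 12 * (t - 1) / (r * t * (k - 1) * (k + 1)) * np.sum((R - r * (k + 1) / 2) ** 2)

rng = np.random.default_rng(11)
sims = np.array([durbin_stat(rng) for _ in range(50000)])
crit = chi2.ppf(0.95, df=t - 1)
print(f"actual size of the nominal 5% chi-square test: {np.mean(sims > crit):.3f}")
```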

18.
Regularized variable selection is a powerful tool for identifying the true regression model from a large number of candidates by applying penalties to the objective functions. The penalty functions typically involve a tuning parameter that controls the complexity of the selected model. The ability of the regularized variable selection methods to identify the true model critically depends on the correct choice of the tuning parameter. In this study, we develop a consistent tuning parameter selection method for regularized Cox's proportional hazards model with a diverging number of parameters. The tuning parameter is selected by minimizing the generalized information criterion. We prove that, for any penalty that possesses the oracle property, the proposed tuning parameter selection method identifies the true model with probability approaching one as sample size increases. Its finite sample performance is evaluated by simulations. Its practical use is demonstrated in The Cancer Genome Atlas breast cancer data.

19.
Simulation data are used to test a student's beliefs about the relative probabilities of two sequences obtained by flipping a fair coin. The episode is used to illustrate general issues in using simulations instructionally.
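A classroom simulation of this kind might look like the sketch below: it estimates the probabilities of two specific six-flip sequences, one "random looking" and one a pure streak, both of which have exact probability 1/64. The particular sequences and run length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2023)
n_sim, length = 200_000, 6
seq_a, seq_b = "HTHTTH", "HHHHHH"   # students often judge the streak to be less likely
flips = rng.integers(0, 2, size=(n_sim, length))
as_str = np.array(["".join("HT"[b] for b in row) for row in flips])
for s in (seq_a, seq_b):
    print(f"P({s}) ~ {np.mean(as_str == s):.5f}   (exact value 1/64 = {1/64:.5f})")
```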

20.
Kernel discriminant analysis translates the original classification problem into feature space and solves the problem with dimension and sample size interchanged. In high-dimension low sample size (HDLSS) settings, this reduces the ‘dimension’ to that of the sample size. For HDLSS two-class problems we modify Mika's kernel Fisher discriminant function which – in general – remains ill-posed even in a kernel setting; see Mika et al. (1999). We propose a kernel naive Bayes discriminant function and its smoothed version, using first- and second-degree polynomial kernels. For fixed sample size and increasing dimension, we present asymptotic expressions for the kernel discriminant functions, discriminant directions and for the error probability of our kernel discriminant functions. The theoretical calculations are complemented by simulations which show the convergence of the estimators to the population quantities as the dimension grows. We illustrate the performance of the new discriminant rules, which are easy to implement, on real HDLSS data. For such data, our results clearly demonstrate the superior performance of the new discriminant rules, and especially their smoothed versions, over Mika's kernel Fisher version, and typically also over the commonly used naive Bayes discriminant rule.
