Similar documents (20 results)
1.
When using multilevel regression models that incorporate cluster-specific random effects, the Wald and the likelihood ratio (LR) tests are used for testing the null hypothesis that the variance of the random effects distribution is equal to zero. We conducted a series of Monte Carlo simulations to examine the effect of the number of clusters and the number of subjects per cluster on the statistical power to detect a non-null random effects variance, and to compare the empirical type I error rates of the Wald and LR tests. Statistical power increased with the number of clusters and the number of subjects per cluster, and was greater for the LR test than for the Wald test. These results applied to both linear and logistic regression, but were more pronounced for the latter. The LR test is therefore preferable to the Wald test.
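As a sketch of the comparison described above, the LR test for a zero random-effects variance can be computed by fitting a one-way random-intercept model and its fixed-effects counterpart by maximum likelihood. The simulation settings and parameterization below are illustrative, not the paper's; note that because the variance sits on the boundary of the parameter space, the p-value uses the usual 50:50 chi-square mixture.

```python
import numpy as np
from scipy import optimize, stats

# Simulate balanced clustered data: k clusters of m subjects,
# y_ij = a_i + e_ij with a_i ~ N(0, sa^2) and e_ij ~ N(0, se^2).
rng = np.random.default_rng(0)
k, m = 30, 10
y = rng.normal(0.0, 0.5, size=(k, 1)) + rng.normal(0.0, 1.0, size=(k, m))

def negloglik(params, y, random_effect=True):
    """Negative Gaussian log-likelihood with exchangeable within-cluster covariance."""
    mu, se2 = params[0], np.exp(params[1])
    sa2 = np.exp(params[2]) if random_effect else 0.0
    cols = y.shape[1]
    cov = se2 * np.eye(cols) + sa2 * np.ones((cols, cols))
    return -stats.multivariate_normal(mean=np.full(cols, mu), cov=cov).logpdf(y).sum()

# Alternative: mu, se2, sa2 free.  Null: sa2 fixed at zero.
fit1 = optimize.minimize(negloglik, [0.0, 0.0, -1.0], args=(y, True), method="Nelder-Mead")
fit0 = optimize.minimize(negloglik, [0.0, 0.0], args=(y, False), method="Nelder-Mead")
lr = 2.0 * (fit0.fun - fit1.fun)
# Variance tested at the boundary: 50:50 mixture of chi2_0 and chi2_1.
p_value = 0.5 * stats.chi2.sf(lr, df=1)
```

With a true between-cluster standard deviation of 0.5 the LR statistic is large and the null is rejected, consistent with the power results the abstract reports for moderate numbers of clusters.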

2.
In a recent volume of this journal, Holden [Testing the normality assumption in the Tobit Model, J. Appl. Stat. 31 (2004) pp. 521–532] presents Monte Carlo evidence comparing several tests for departures from normality in the Tobit Model. This study adds to the work of Holden by considering another test, and several information criteria, for detecting departures from normality in the Tobit Model. The test given here is a modified likelihood ratio statistic based on a partially adaptive estimator of the Censored Regression Model using the approach of Caudill [A partially adaptive estimator for the Censored Regression Model based on a mixture of normal distributions, Working Paper, Department of Economics, Auburn University, 2007]. The information criteria examined include Akaike’s Information Criterion (AIC), the Consistent AIC (CAIC), the Bayesian Information Criterion (BIC), and Akaike’s BIC (ABIC). In terms of fewest ‘rejections’ of a true null, the best performance is exhibited by the CAIC and the BIC, although, like some of the statistics examined by Holden, there are computational difficulties with each.

3.
The goal of the current paper is to compare consistent and inconsistent model selection criteria by looking at their convergence rates (to be defined in the first section). The prototypes of the two types of criteria are the AIC and BIC criteria, respectively. For linear regression models with normally distributed errors, we show that the convergence rates for AIC and BIC are O(n^{-1}) and O((n log n)^{-1/2}), respectively. When the error distributions are unknown, the two criteria become indistinguishable, all having convergence rate O(n^{-1/2}). We also argue that the BIC criterion has a nearly optimal convergence rate. These results partially justify some of the controversial simulation results in which inconsistent criteria seem to outperform consistent ones.
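For concreteness, the two criteria being compared can be written down directly for a Gaussian linear regression. The sketch below uses our notation, not the paper's, and drops additive constants shared by all candidate models:

```python
import numpy as np

def gaussian_ic(y, X):
    """AIC and BIC for an OLS fit with Gaussian errors, up to additive constants.

    k regression coefficients plus the error variance give k + 1 parameters.
    """
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    dev = n * np.log(rss / n)  # -2 * maximized log-likelihood, up to a constant
    return dev + 2 * (k + 1), dev + np.log(n) * (k + 1)  # (AIC, BIC)

# Illustrative use: both criteria favor the model containing a strong regressor.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
aic0, bic0 = gaussian_ic(y, np.ones((n, 1)))                    # intercept only
aic1, bic1 = gaussian_ic(y, np.column_stack([np.ones(n), x]))   # true model
```

The consistency contrast in the abstract comes from the penalty: BIC's log(n) per parameter grows with the sample, AIC's fixed 2 does not.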

4.
This research examines the Type I error rates obtained when using the mixed model with the Kenward–Roger correction and compares them with the Between–Within and Satterthwaite approaches in split-plot designs. A simulation study was conducted to generate repeated measures data with small samples under normal distribution conditions. The data were obtained via three covariance matrices (unstructured, heterogeneous first-order auto-regressive, and random coefficients), the one with the best fit being selected according to the Akaike criterion. The results of the simulation study showed the Kenward–Roger test to be more robust, particularly when the population covariance matrices were unstructured or heterogeneous first-order auto-regressive. The main contribution of this study lies in showing that the Kenward–Roger method corrects the liberal Type I error rates obtained with the Between–Within and Satterthwaite approaches, especially with positive pairings between group sizes and covariance matrices.

5.
K correlated 2×2 tables with structural zero are commonly encountered in infectious disease studies. A hypothesis test for the risk difference in K independent 2×2 tables with structural zero is considered in this paper. Score, likelihood ratio and Wald-type statistics are proposed to test the hypothesis on the basis of stratified data and pooled data. Sample size formulae are derived for controlling a pre-specified power or a pre-determined confidence interval width. Our empirical results show that the score and likelihood ratio statistics behave better than the Wald-type statistic in terms of type I error rate and coverage probability, and that sample sizes based on the stratified test are smaller than those based on the pooled test under the same design. A real example is used to illustrate the proposed methodologies. Copyright © 2009 John Wiley & Sons, Ltd.

6.
This paper investigates, by means of Monte Carlo simulation, the effects of different choices of order for autoregressive approximation on the fully efficient parameter estimates of autoregressive moving average models. Four order selection criteria, AIC, BIC, HQ and PKK, were compared, and different model structures with varying sample sizes were used to contrast the performance of the criteria. Some asymptotic results which provide a useful guide for assessing the performance of these criteria are presented. The comparison shows that there are marked differences in the accuracy implied by these alternative criteria in small sample situations, and that it is preferable to apply the BIC criterion, which leads to greater precision of Gaussian likelihood estimates, in such cases. Implications of the findings of this study for the estimation of time series models are highlighted.

7.
Several unconditional exact tests, which are constructed to control the Type I error rate at the nominal level, are proposed for comparing two independent Poisson rates and compared to the conditional exact test based on a binomial distribution. The unconditional exact tests using the binomial p-value, likelihood ratio, or efficient score as the test statistic improve the power in general, and are therefore recommended. Unconditional exact tests using Wald statistics, whether on the original or square-root scale, may be substantially less powerful than the conditional exact test, and are not recommended. An example is provided from a cardiovascular trial.
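The conditional exact test used as the benchmark above has a compact form: under the null of equal rates, the first count given the total is binomial, so an exact binomial test applies. A minimal sketch, assuming counts x1, x2 with exposures t1, t2 (names are ours):

```python
from scipy.stats import binomtest

def conditional_poisson_test(x1, x2, t1=1.0, t2=1.0):
    """Conditional exact test of H0: lam1 == lam2, where X1 ~ Poisson(lam1 * t1)
    and X2 ~ Poisson(lam2 * t2).  Conditional on the total x1 + x2, X1 is
    Binomial(x1 + x2, t1 / (t1 + t2)) under H0."""
    return binomtest(x1, n=x1 + x2, p=t1 / (t1 + t2)).pvalue
```

For equal counts the two-sided p-value is 1; for a lopsided split such as 20 versus 2 events the null is clearly rejected.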

8.

Inflated data are prevalent in many situations, and a variety of inflated models with extensions have been derived to fit data with excessive counts of some particular responses. The family of information criteria (IC) has been used to compare the fit of models for selection purposes. Yet despite their common use in statistical applications, few studies have evaluated the performance of IC in inflated models. In this study, we examined the performance of IC for dual-inflated data. The new zero- and K-inflated Poisson (ZKIP) regression model and conventional inflated models, including Poisson regression and zero-inflated Poisson (ZIP) regression, were fitted to dual-inflated data, and the performance of the IC was compared. The effects of sample size and of the proportions of inflated observations on selection performance were also examined. The results suggest that the Bayesian information criterion (BIC) and consistent Akaike information criterion (CAIC) are more accurate than the Akaike information criterion (AIC) in terms of model selection when the true model is simple (i.e. Poisson regression (POI)). For more complex models, such as ZIP and ZKIP, the AIC was consistently better than the BIC and CAIC, although it did not reach high levels of accuracy when the sample size and the proportion of zero observations were small. The AIC tended to over-fit the data for the POI, whereas the BIC and CAIC tended to under-parameterize the data for ZIP and ZKIP. Therefore, it is desirable to study other model selection criteria for dual-inflated data with small sample sizes.
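As an illustration of the dual-inflation idea, a ZKIP likelihood can be written as a three-part mixture: a Poisson component plus extra point masses at 0 and at K. The function below is our sketch of such a pmf; the parameter names (pi0, piK) are hypothetical, not the paper's notation.

```python
import numpy as np
from scipy.stats import poisson

def zkip_pmf(y, lam, pi0, piK, K):
    """Pmf of a zero- and K-inflated Poisson: with probability pi0 emit a
    structural zero, with probability piK emit the value K, and otherwise
    draw from Poisson(lam)."""
    y = np.asarray(y)
    pmf = (1.0 - pi0 - piK) * poisson.pmf(y, lam)
    return pmf + np.where(y == 0, pi0, 0.0) + np.where(y == K, piK, 0.0)

# Setting piK = 0 recovers the ordinary ZIP pmf, and pi0 = piK = 0 the Poisson.
```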

9.
The parametric bootstrap tests and the asymptotic or approximate tests for detecting a difference between two Poisson means are compared. The test statistics used are the Wald statistics with and without log-transformation, the Cox F statistic and the likelihood ratio statistic. It is found that the type I error rate of an asymptotic/approximate test may deviate too much from the nominal significance level α in some situations. We therefore recommend the parametric bootstrap tests, under which the four test statistics are similarly powerful and their type I error rates are all close to α. We apply the tests to breast cancer data and injurious motor vehicle crash data.
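A minimal sketch of the parametric bootstrap idea, using the untransformed Wald statistic and a single count per group for simplicity (the paper also studies the log-transformed Wald, Cox F and LR statistics; the function name and settings are ours):

```python
import numpy as np

def pb_poisson_test(x1, x2, n_boot=2000, seed=0):
    """Parametric bootstrap p-value for H0: lam1 == lam2 given one Poisson
    count per group, using the Wald statistic without transformation."""
    rng = np.random.default_rng(seed)
    lam0 = (x1 + x2) / 2  # pooled rate estimate under H0

    def wald(a, b):
        return (a - b) ** 2 / (a + b) if a + b > 0 else 0.0

    t_obs = wald(x1, x2)
    # Resample both counts from the fitted null model and recompute the statistic.
    b1 = rng.poisson(lam0, n_boot)
    b2 = rng.poisson(lam0, n_boot)
    t_boot = np.array([wald(a, b) for a, b in zip(b1, b2)])
    return float(np.mean(t_boot >= t_obs))

p_null = pb_poisson_test(10, 12)  # similar counts: no evidence of a difference
p_alt = pb_poisson_test(5, 40)    # very different counts: strong evidence
```

Because the null distribution is simulated rather than taken from the chi-square approximation, the type I error rate stays close to the nominal level even for small counts, which is the point the abstract makes.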

10.
The Akaike Information Criterion (AIC) is developed for selecting the variables of the nested error regression model, where an unobservable random effect is present. Using the idea of decomposing the likelihood into the two parts of the “within” and “between” analysis of variance, we derive the AIC when the number of groups is large and the ratio of the variances of the random effects and the random errors is an unknown parameter. The proposed AIC is compared, using simulation, with Mallows' Cp, Akaike's AIC, and Sugiura's exact AIC. Based on the rates of selecting the true model, it is shown that the proposed AIC performs better.

11.
Heterogeneity of variances of treatment groups influences the validity and power of significance tests of location in two distinct ways. First, if sample sizes are unequal, the Type I error rate and power are depressed if a larger variance is associated with a larger sample size, and elevated if a larger variance is associated with a smaller sample size. This well-established effect, which occurs in t and F tests, and to a lesser degree in nonparametric rank tests, results from unequal contributions of pooled estimates of error variance in the computation of test statistics. It is observed in samples from normal distributions, as well as non-normal distributions of various shapes. Second, transformation of scores from skewed distributions with unequal variances to ranks produces differences in the means of the ranks assigned to the respective groups, even if the means of the initial groups are equal, and a subsequent inflation of Type I error rates and power. This effect occurs for all sample sizes, equal and unequal. For the t test, the discrepancy diminishes, and for the Wilcoxon–Mann–Whitney test, it becomes larger, as sample size increases. The Welch separate-variance t test overcomes the first effect but not the second. Because of interaction of these separate effects, the validity and power of both parametric and nonparametric tests performed on samples of any size from unknown distributions with possibly unequal variances can be distorted in unpredictable ways.

12.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood-based MAR approach – mixed-model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
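The LOCF imputation being contrasted with MMRM is mechanically simple. A sketch, assuming one row per subject, one column per visit, an observed baseline in column 0, and NaN marking a missed visit:

```python
import numpy as np

def locf(y):
    """Last observation carried forward across columns (visits).
    Each missing value is replaced by the most recent observed value
    in the same row; column 0 (baseline) is assumed observed."""
    y = np.array(y, dtype=float)
    for j in range(1, y.shape[1]):
        miss = np.isnan(y[:, j])
        y[miss, j] = y[miss, j - 1]  # already-filled values propagate forward
    return y

# Illustrative use: two subjects, three visits, dropout after different visits.
example = locf([[1.0, np.nan, np.nan],
                [2.0, 3.0, np.nan]])
```

Carrying the last value forward freezes each dropout's trajectory at its last observation, which is exactly why the abstract finds LOCF's standard errors too small and its Type I error rate inflated under dropout.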

13.
Gottman's version of the Mann and Wald asymptotic test for intervention effects in time-series data is presented as a useful small sample procedure. A Monte Carlo simulation is conducted to evaluate the procedure for controlling Type I errors with varying values of the autoregressive coefficients. Results indicate the procedure works better than Gottman's work originally indicated; however, in some cases error rates can be unacceptably high. Procedures for evaluating changes in level in the presence of autocorrelation and slope are suggested and evaluated.

14.
In this article we consider the two-way ANOVA model without interaction under heteroscedasticity. For the problem of testing equal effects of factors, we propose a parametric bootstrap (PB) approach and compare it with the existing generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test performs better than the GF test. The PB test performs very satisfactorily even for small samples, while the GF test exhibits poor Type I error properties as the number of factorial combinations or treatments goes up. It is also noted that the same tests can be used to test the significance of the random effect variance component in a two-way mixed-effects model under unequal error variances.

15.
Stock & Watson (1999) consider the relative quality of different univariate forecasting techniques. This paper extends their study of forecasting practice, comparing the forecasting performance of two popular model selection procedures, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). This paper considers several topics: how AIC and BIC choose lags in autoregressive models on actual series; how the models so selected forecast relative to an AR(4) model; the effect of using a maximum lag on model selection; and the forecasting performance of combining the AR(4), AIC, and BIC models with equal weights.
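Lag selection by an information criterion, as studied here, can be sketched as fitting each AR(p) by conditional least squares on a common estimation sample and keeping the order with the smallest criterion value. The implementation below is our simplified illustration (BIC only), not the paper's code:

```python
import numpy as np

def ar_bic_lag(y, max_lag=8):
    """Choose an AR order by BIC.  Each AR(p), p = 1..max_lag, is fitted by
    conditional least squares on the same len(y) - max_lag observations so
    the criteria are comparable across orders."""
    y = np.asarray(y, dtype=float)
    n_eff = len(y) - max_lag
    best_p, best_bic = 1, np.inf
    for p in range(1, max_lag + 1):
        # Design matrix: intercept plus p lagged copies of the series.
        X = np.column_stack(
            [np.ones(n_eff)] + [y[max_lag - j: len(y) - j] for j in range(1, p + 1)]
        )
        target = y[max_lag:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        rss = float(np.sum((target - X @ beta) ** 2))
        bic = n_eff * np.log(rss / n_eff) + np.log(n_eff) * (p + 1)
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

# Illustrative use on a simulated AR(2) series.
rng = np.random.default_rng(2)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()
p_hat = ar_bic_lag(y, max_lag=8)
```

Replacing `np.log(n_eff)` with 2 in the penalty turns the same loop into AIC selection, which tends to choose longer lags than BIC on the same series.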

16.
We evaluated the properties of six statistical methods for testing equality among populations with zero-inflated continuous distributions. These tests are based on likelihood ratio (LR), Wald, central limit theorem (CLT), modified CLT (MCLT), parametric jackknife (PJ), and nonparametric jackknife (NPJ) statistics. We investigated their statistical properties using simulated data from mixed distributions in which an unknown portion of the nonzero observations has an underlying gamma, exponential, or log-normal density function and the remaining portion consists of excessive zeros. The six statistical tests are compared in terms of their empirical Type I errors and powers, estimated through 10,000 repeated simulated samples for carefully selected configurations of parameters. The LR, Wald, and PJ tests are preferred since their empirical Type I errors were close to the preset nominal 0.05 level and each demonstrated good power for rejecting null hypotheses when the sample sizes were at least 125 in each group. The NPJ test had unacceptable empirical Type I errors because it rejected far too often, while the CLT and MCLT tests had low power in some cases. Therefore, these three tests are not recommended for general use, but the LR, Wald, and PJ tests all performed well in large sample applications.

17.
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than existing criteria in that case also.

18.
We are interested in comparing logistic regressions for several test treatments or populations with a logistic regression for a standard treatment or population. The research was motivated by some real life problems, which are discussed as data examples. We propose a step-down likelihood ratio method for declaring differences between the test treatments or populations and the standard treatment or population. Competitors based on the sequentially rejective Bonferroni Wald statistic, the sequentially rejective exact Wald statistic and Reiersøl's statistic are also discussed. It is shown that the proposed method asymptotically controls the probability of type I error. A Monte Carlo simulation shows that the proposed method performs well for relatively small sample sizes, outperforming its competitors.

19.
This study examined the influence of heterogeneity of variance on Type I error rates and power of the independent-samples Student's t-test of equality of means on samples of scores from normal and 10 non-normal distributions. The same test of equality of means was performed on corresponding rank-transformed scores. For many non-normal distributions, both versions produced anomalous power functions, resulting partly from the fact that the hypothesis test was biased, so that under some conditions the probability of rejecting H0 decreased as the difference between means increased. In all cases where bias occurred, the t-test on ranks exhibited substantially greater bias than the t-test on scores. This anomalous result was independent of the more familiar changes in Type I error rates and power attributable to unequal sample sizes combined with unequal variances.

20.
Asymptotically, the Wald-type test for generalised estimating equations (GEE) models can control the type I error rate at the nominal level. However, in small sample studies it may lead to inflated type I error rates. Even with currently available small sample corrections for the GEE Wald-type test, the type I error rate inflation is still serious when the tested contrast is multidimensional. This paper extends the ANOVA-type test for heteroscedastic factorial designs to GEE and shows that the proposed ANOVA-type test can also control the type I error rate at the nominal level in small sample studies while still maintaining robustness with respect to mis-specification of the working correlation matrix. Differences of inference between the Wald-type test and the proposed test are observed in a two-way repeated measures ANOVA model for a diet-induced obesity study and a two-way repeated measures logistic regression for a collagen-induced arthritis study. Simulation studies confirm that the proposed test has better control of the type I error rate than the Wald-type test in small sample repeated measures models. Additional simulation studies further show that the proposed test can even achieve larger power than the Wald-type test in some cases of the large sample repeated measures ANOVA models that were investigated.
