Similar Documents
20 similar documents found (search time: 31 ms)
1.
Real world data often fail to meet the underlying assumption of population normality. The Rank Transformation (RT) procedure has been recommended as an alternative to the parametric factorial analysis of covariance (ANCOVA). The purpose of this study was to compare the Type I error and power properties of the RT ANCOVA to those of the parametric procedure in the context of a completely randomized balanced 3 × 4 factorial layout with one covariate. This study was concerned with tests of homogeneity of regression coefficients and interaction under conditional (non)normality. Both procedures displayed erratic Type I error rates for the test of homogeneity of regression coefficients under conditional nonnormality. With all parametric assumptions valid, the simulation results demonstrated that the RT ANCOVA failed as a test for either homogeneity of regression coefficients or interaction due to severe Type I error inflation. The error inflation was most severe when departures from conditional normality were extreme. Also associated with the RT procedure was a loss of power. It is recommended that the RT procedure not be used as an alternative to factorial ANCOVA despite its endorsement by SAS, IMSL, and other respected sources.
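As a point of reference for how the RT procedure operates, the following is a minimal, hypothetical Python sketch (not the authors' simulation code): the raw ANCOVA is fitted, then the response and covariate are replaced by their ranks and the same parametric model is refitted. Cell sizes, effect sizes, and the heavy-tailed error distribution are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import rankdata

rng = np.random.default_rng(0)

# Hypothetical balanced 3 x 4 layout with one covariate, 10 observations per cell.
n_per_cell = 10
a = np.repeat(np.arange(3), 4 * n_per_cell)
b = np.tile(np.repeat(np.arange(4), n_per_cell), 3)
x = rng.normal(size=a.size)                       # covariate
y = 0.5 * x + rng.standard_t(df=3, size=a.size)   # heavy-tailed (conditionally nonnormal) errors
df = pd.DataFrame({"A": a, "B": b, "x": x, "y": y})

# Parametric ANCOVA on the raw responses.
raw_fit = smf.ols("y ~ C(A) * C(B) + x", data=df).fit()

# Rank-transform ANCOVA: replace response and covariate by their ranks, then refit.
df["ry"] = rankdata(df["y"])
df["rx"] = rankdata(df["x"])
rt_fit = smf.ols("ry ~ C(A) * C(B) + rx", data=df).fit()

# Compare the interaction tests from the two procedures.
print(sm.stats.anova_lm(raw_fit, typ=2))
print(sm.stats.anova_lm(rt_fit, typ=2))
```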

2.
Most multivariate statistical techniques rely on the assumption of multivariate normality. The effects of nonnormality on multivariate tests are assumed to be negligible when variance–covariance matrices and sample sizes are equal. Therefore, in practice, investigators usually do not attempt to assess multivariate normality. In this simulation study, the effects of skewed and leptokurtic multivariate data on the Type I error and power of Hotelling's T² were examined by manipulating distribution, sample size, and variance–covariance matrix. The empirical Type I error rate and power of Hotelling's T² were calculated before and after the application of the generalized Box–Cox transformation. The findings demonstrated that even when variance–covariance matrices and sample sizes are equal, small to moderate changes in power still can be observed.
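To make the comparison concrete, here is a hedged Python sketch of a two-sample version of this workflow: compute Hotelling's T² on skewed data, then again after a Box–Cox transformation. The per-variable Box–Cox with a common lambda estimated from the pooled data is a crude stand-in for the generalized multivariate transformation used in the paper, and the data-generating settings are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 with pooled covariance; returns T^2 and the F-based p-value."""
    n1, p = x.shape
    n2, _ = y.shape
    d = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return t2, stats.f.sf(f_stat, p, n1 + n2 - p - 1)

rng = np.random.default_rng(1)
x = rng.lognormal(size=(30, 3))               # skewed multivariate sample (independent coordinates here)
y = rng.lognormal(mean=0.2, size=(30, 3))
print(hotelling_t2(x, y))

# Per-variable Box-Cox with a common lambda from the pooled data:
# a crude stand-in for the generalized multivariate transformation.
pooled = np.vstack([x, y])
lams = [stats.boxcox(pooled[:, j])[1] for j in range(pooled.shape[1])]
xt = np.column_stack([stats.boxcox(x[:, j], lmbda=lams[j]) for j in range(x.shape[1])])
yt = np.column_stack([stats.boxcox(y[:, j], lmbda=lams[j]) for j in range(y.shape[1])])
print(hotelling_t2(xt, yt))
```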

3.
We compared the robustness of univariate and multivariate statistical procedures for controlling Type I error rates when the normality and homoscedasticity assumptions are not fulfilled. The procedures we evaluated were the mixed model fitted with the SAS Proc Mixed module, the Bootstrap-F approach, the Brown–Forsythe multivariate approach, the Welch–James multivariate approach, and the Welch–James multivariate approach with robust estimators. The results suggest that the Kenward–Roger, Brown–Forsythe, Welch–James, and Improved Generalized Approximate procedures satisfactorily kept Type I error rates within the nominal levels for both the main and interaction effects under most of the conditions assessed.

4.
The empirical likelihood (EL) technique is a powerful nonparametric method with wide theoretical and practical applications. In this article, we use the EL methodology to develop simple and efficient goodness-of-fit tests for normality based on the dependence between moments that characterizes normal distributions. The new empirical likelihood ratio (ELR) tests are exact and are shown to be very powerful decision rules for small to moderate sample sizes. Asymptotic results related to the Type I error rates of the proposed tests are presented. We present a broad Monte Carlo comparison between different tests for normality, confirming the preference for the proposed method from a power perspective. A real data example is provided.

5.
Inference for the general linear model makes several assumptions, including independence of errors, normality, and homogeneity of variance. Departure from the latter two of these assumptions may indicate the need for data transformation or removal of outlying observations. Informal procedures such as diagnostic plots of residuals are frequently used to assess the validity of these assumptions or to identify possible outliers. A simulation-based approach is proposed, which facilitates the interpretation of various diagnostic plots by adding simultaneous tolerance bounds. Several tests exist for normality or homoscedasticity in simple random samples. These tests are often applied to residuals from a linear model fit. The resulting procedures are approximate in that correlation among residuals is ignored. The simulation-based approach accounts for the correlation structure of residuals in the linear model and allows simultaneously checking for possible outliers, nonnormality, and heteroscedasticity, and it does not rely on formal testing.

[Supplementary materials are available for this article. Go to the publisher's online edition of Communications in Statistics—Simulation and Computation® for the following three supplemental resources: a Word file containing figures illustrating the mode of operation of the bisectional algorithm, QQ-plots, and a residual plot for the mussels data.]
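The simulation-based envelope idea described above can be illustrated with a short, hypothetical Python sketch (not the authors' algorithm, which constructs simultaneous bounds and covers several diagnostics at once): fit the linear model, repeatedly simulate responses from the fitted model, refit, and use the spread of the ordered simulated residuals as a pointwise envelope for the observed QQ-plot.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical data and linear model fit.
n = 60
X = sm.add_constant(rng.normal(size=(n, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=1.2, size=n)
fit = sm.OLS(y, X).fit()
sigma = np.sqrt(fit.scale)

# Simulate B new response vectors from the fitted model, refit,
# and collect the ordered (sorted) residuals from each refit.
B = 1000
sim_sorted = np.empty((B, n))
for b in range(B):
    y_sim = fit.fittedvalues + rng.normal(scale=sigma, size=n)
    sim_sorted[b] = np.sort(sm.OLS(y_sim, X).fit().resid)

# Pointwise 95% envelope for the ordered residuals; the paper's bounds
# are simultaneous, which requires a further adjustment.
lower, upper = np.percentile(sim_sorted, [2.5, 97.5], axis=0)
obs = np.sort(fit.resid)
print(np.mean((obs < lower) | (obs > upper)))  # fraction of points outside the envelope
```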

6.
Traditionally, the grain yield of rice is assessed by splitting it into its yield components, including the number of panicles per plant, the number of spikelets per panicle, the 1000-grain weight, and the filled-spikelet percentage, so that yield performance can be evaluated through each component individually, and the products of the yield components are employed for grain yield comparisons. However, when standard statistical methods such as the two-sample t-test and analysis of variance are used, the assumptions of normality and variance homogeneity cannot be fully justified for comparing the grain yields, so the empirical sizes cannot be adequately controlled. In this study, based on the concepts of generalized test variables and generalized p-values, a novel statistical testing procedure is developed for grain yield comparisons of rice. The proposed method is assessed by a series of numerical simulations. According to the simulation results, the proposed method performs reasonably well in Type I error control and empirical power. In addition, a real-life field experiment is analyzed with the proposed method; some productive rice varieties are screened out and suggested for follow-up investigation.

7.
The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed.
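To illustrate the kind of sample-path behaviour at issue, here is a hedged Python sketch that simulates a local-level exponential smoothing state space model with multiplicative errors (ETS(M,N,N)). The parameter values are illustrative assumptions, and this is not the authors' proposed model for the jewelry data.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_mnn(level0=10.0, alpha=0.3, sigma=0.2, horizon=200, n_paths=5):
    """Simulate ETS(M,N,N): y_t = l_{t-1}(1 + e_t), l_t = l_{t-1}(1 + alpha * e_t)."""
    paths = np.empty((n_paths, horizon))
    for i in range(n_paths):
        level = level0
        for t in range(horizon):
            e = rng.normal(scale=sigma)
            y = level * (1 + e)
            level = level * (1 + alpha * e)
            paths[i, t] = y
    return paths

paths = simulate_mnn()
# With these settings E[log(1 + alpha * e)] < 0 (Jensen's inequality),
# so levels tend to decay toward zero over long horizons.
print(paths[:, -1])
```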

8.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non-monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.

9.
Motivated by the need to analyze the National Longitudinal Surveys data, we propose a new semiparametric longitudinal mean-covariance model in which the effects of some explanatory variables on the dependent variable are linear and others are non-linear, while the within-subject correlations are modelled by a non-stationary autoregressive error structure. We develop an estimation machinery based on the least squares technique by approximating the non-parametric functions via B-spline expansions, and we establish the asymptotic normality of the parametric estimators as well as the rate of convergence of the non-parametric estimators. We further advocate a new model selection strategy in the varying-coefficient model framework for distinguishing whether a component is significant and, subsequently, whether it is linear or non-linear. In addition, the proposed method can also be employed for consistently identifying the true order of the lagged terms. Monte Carlo studies are conducted to examine the finite sample performance of our approach, and an application to real data is also presented.

10.
This paper considers the problem of variance estimation for sparse ultra-high dimensional varying coefficient models. We first use B-splines to approximate the coefficient functions and discuss the asymptotic behavior of a naive two-stage estimator of the error variance. We also reveal that this naive estimator may significantly underestimate the error variance due to spurious correlations, which are even higher for nonparametric models than for linear models. This prompts us to propose an accurate estimator of the error variance by effectively integrating the sure independence screening and refitted cross-validation techniques. The consistency and asymptotic normality of the resulting estimator are established under some regularity conditions. Simulation studies are carried out to assess the finite sample performance of the proposed methods.
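The refitted cross-validation idea behind the proposed estimator can be sketched for a plain sparse linear model (the paper combines it with B-spline approximations of varying coefficient functions, which is more involved): split the sample in half, screen variables on one half, refit on the other half using only the screened variables, estimate the error variance from that refit, then swap the halves and average. A hypothetical Python illustration, with dimensions and screening size chosen for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(4)

n, p, s = 100, 500, 3                      # high-dimensional design with 3 active variables
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.normal(size=n)          # true error variance is 1

def rcv_sigma2(X, y, n_keep=10):
    """Refitted cross-validation estimate of the error variance."""
    n = len(y)
    idx = rng.permutation(n)
    halves = (idx[: n // 2], idx[n // 2 :])
    estimates = []
    for screen, refit in (halves, halves[::-1]):
        # Screening step: keep variables with the largest marginal correlations (SIS).
        corr = np.abs(X[screen].T @ (y[screen] - y[screen].mean()))
        keep = np.argsort(corr)[-n_keep:]
        # Refitting step: OLS on the other half using only the screened variables.
        Xr = X[refit][:, keep]
        resid = y[refit] - Xr @ np.linalg.lstsq(Xr, y[refit], rcond=None)[0]
        estimates.append(resid @ resid / (len(refit) - n_keep))
    return np.mean(estimates)

print(rcv_sigma2(X, y))   # should land near the true error variance of 1
```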

11.
One of the most basic topics in many introductory statistical methods texts is inference for a population mean, μ. The primary tool for confidence intervals and tests is the Student t sampling distribution. Although the derivation requires independent identically distributed normal random variables with constant variance, σ2, most authors reassure the readers about some robustness to the normality and constant variance assumptions. Some point out that if one is concerned about assumptions, one may statistically test these prior to reliance on the Student t. Most software packages provide optional test results for both (a) the Gaussian assumption and (b) homogeneity of variance. Many textbooks advise only informal graphical assessments, such as certain scatterplots for independence, others for constant variance, and normal quantile–quantile plots for the adequacy of the Gaussian model. We concur with this recommendation. As convincing evidence against formal tests of (a), such as the Shapiro–Wilk, we offer a simulation study of the tails of the resulting conditional sampling distributions of the Studentized mean. We analyze the results of systematically screening all samples from normal, uniform, exponential, and Cauchy populations. This pretest does not correct the erroneous significance levels and makes matters worse for the exponential. In practice, we conclude that graphical diagnostics are better than a formal pretest. Furthermore, rank or permutation methods are recommended for exact validity in the symmetric case.
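The screening simulation described can be reproduced in miniature: draw many samples from a nonnormal population under a true null, keep only those samples that pass a Shapiro–Wilk pretest, and compare the conditional rejection rate of the one-sample t-test with the unconditional one. A hedged Python sketch, assuming an exponential population (one of the cases mentioned); the sample size and number of replications are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n, n_sim, alpha = 20, 20_000, 0.05
true_mean = 1.0                               # mean of Exp(1), so the null hypothesis is true

rejections_all, rejections_passed, passed = 0, 0, 0
for _ in range(n_sim):
    x = rng.exponential(scale=1.0, size=n)
    reject = stats.ttest_1samp(x, true_mean).pvalue < alpha
    rejections_all += reject
    if stats.shapiro(x).pvalue > alpha:       # sample "passes" the normality pretest
        passed += 1
        rejections_passed += reject

print("unconditional Type I error:", rejections_all / n_sim)
print("conditional on passing Shapiro-Wilk:", rejections_passed / max(passed, 1))
```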

12.
Interest in confirmatory adaptive combined phase II/III studies with treatment selection has increased in the past few years. These studies start comparing several treatments with a control. One (or more) treatment(s) is then selected after the first stage based on the available information at an interim analysis, including interim data from the ongoing trial, external information and expert knowledge. Recruitment continues, but now only for the selected treatment(s) and the control, possibly in combination with a sample size reassessment. The final analysis of the selected treatment(s) includes the patients from both stages and is performed such that the overall Type I error rate is strictly controlled, thus providing confirmatory evidence of efficacy at the final analysis. In this paper we describe two approaches to control the Type I error rate in adaptive designs with sample size reassessment and/or treatment selection. The first method adjusts the critical value using a simulation-based approach, which incorporates the number of patients at an interim analysis, the true response rates, the treatment selection rule, etc. We discuss the underlying assumptions of simulation-based procedures and give several examples where the Type I error rate is not controlled if some of the assumptions are violated. The second method is an adaptive Bonferroni-Holm test procedure based on conditional error rates of the individual treatment-control comparisons. We show that this procedure controls the Type I error rate, even if a deviation from a pre-planned adaptation rule or the time point of such a decision is necessary.

13.
Some nonparametric methods have been proposed to compare survival medians. Most of them rely on the asymptotic null distribution to estimate the p-value. However, for small to moderate sample sizes, those tests may have an inflated Type I error rate, which limits their application. In this article, we propose a new nonparametric test that uses the bootstrap to estimate the mean and variance of the sample median. Through comprehensive simulation, we show that the proposed approach controls Type I error rates well. A real data application is used to illustrate the use of the new test.
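The general mechanics of a bootstrap-based comparison of two medians can be sketched as follows. This is a hypothetical, uncensored-data illustration of the idea, not the authors' test for survival medians; the distributions, sample sizes, and bootstrap settings are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def boot_median(x, n_boot=2000):
    """Bootstrap estimates of the mean and variance of the sample median."""
    meds = np.array([np.median(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return meds.mean(), meds.var(ddof=1)

# Two hypothetical uncensored samples.
x = rng.exponential(scale=1.0, size=40)
y = rng.exponential(scale=1.3, size=40)

mx, vx = boot_median(x)
my, vy = boot_median(y)
z = (mx - my) / np.sqrt(vx + vy)              # normal-approximation test statistic
print("z =", z, "two-sided p =", 2 * norm.sf(abs(z)))
```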

14.
In this article, we consider the three-factor unbalanced nested design model without the assumption of equal error variance. For the problem of testing "main effects" of the three factors, we propose a parametric bootstrap (PB) approach and compare it with the existing generalized F (GF) test. The Type I error rates of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test performs better than the generalized F-test. The PB test performs very satisfactorily even for small samples while the GF test exhibits poor Type I error properties when the number of factorial combinations or treatments goes up. It is also noted that the same tests can be used to test the significance of the random effect variance component in a three-factor mixed effects nested model under unequal error variances.

15.
The use of single group skewness and kurtosis critical values for the assessment of residual normality in the ANOVA model is examined. Using single group critical values gives a conservative test of residual normality in multiple group designs. As the sample size per group increases, the empirical Type I error rates for the skewness and kurtosis tests of residual normality approach α. These results supplement previous work which has focused on testing residual normality in the linear regression model.
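As a concrete illustration of what such a residual normality check looks like, here is a minimal Python sketch assuming a one-way layout with four groups (group sizes and means are illustrative): the ANOVA residuals are the deviations from the group means, and single-group skewness and kurtosis tests are applied to the pooled residuals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical one-way ANOVA data: 4 groups of 25 observations each.
groups = [rng.normal(loc=m, size=25) for m in (0.0, 0.5, 1.0, 1.5)]

# Residuals are deviations from the group means.
residuals = np.concatenate([g - g.mean() for g in groups])

# Normal-theory skewness and kurtosis tests applied to the pooled residuals.
print(stats.skewtest(residuals))
print(stats.kurtosistest(residuals))
```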

16.
This article is concerned with the estimation problem for heteroscedastic partially linear errors-in-variables models. We derive the asymptotic normality of the estimators of the slope parameter and the nonparametric component in the case of known error variance with stationary α-mixing random errors. When the error variance is unknown, the asymptotic normality of the estimators of the slope parameter and the nonparametric component, as well as of the variance function, is considered under independence assumptions. The finite sample behavior of the estimators is also investigated via simulations.

17.
We propose a new type of multivariate statistical model that permits non-Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair-copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum-likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non-Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. The Canadian Journal of Statistics 40: 86–109; 2012 © 2012 Statistical Society of Canada

18.
Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For simple monotone dose-response functions, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response functions it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrated the advantage of the proposed approach. Conclusion: Our results confirm that selecting the functional form of the dose-response a posteriori inflates the Type I error. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
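The correction can be understood as calibrating the selection step itself: simulate data under the null hypothesis, apply the same pick-the-best-transformation rule, record the largest likelihood ratio statistic, and use its upper quantile as the critical value. The following is a deliberately simplified Python sketch that uses an ordinary linear model in place of the Cox model and a hypothetical set of candidate transformations; it illustrates the principle only, not the authors' procedure.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(8)
transforms = [lambda t: t, np.sqrt, np.log1p, np.square]   # candidate dose-response shapes

def max_lr_stat(x, y):
    """Largest likelihood ratio statistic over the candidate transformations."""
    null_llf = sm.OLS(y, np.ones((len(y), 1))).fit().llf
    best = -np.inf
    for f in transforms:
        llf = sm.OLS(y, sm.add_constant(f(x))).fit().llf
        best = max(best, 2 * (llf - null_llf))
    return best

# Calibrate the critical value under the null (no covariate effect).
n, n_sim = 200, 2000
x = rng.uniform(0.1, 5.0, size=n)
null_stats = [max_lr_stat(x, rng.normal(size=n)) for _ in range(n_sim)]
corrected_crit = np.quantile(null_stats, 0.95)

# Naive chi-square(1) critical value that the conditional approach would use.
print("naive:", chi2.ppf(0.95, df=1), "corrected:", corrected_crit)
```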

19.
We propose a consistent and locally efficient method for estimating the model parameters of a logistic mixed effect model with random slopes. Our approach relaxes two typical assumptions: that the random effects are normally distributed, and that the covariates and random effects are independent of each other. Adhering to these assumptions is particularly difficult in health studies where, in many cases, we have limited resources to design experiments and gather data in long-term studies, while new findings from other fields might emerge suggesting that such assumptions are violated. It is therefore crucial to have an estimator that is robust to such violations, so that better use can be made of data harvested with these valuable resources. Our method generalizes the framework presented in Garcia & Ma (2016), which also deals with a logistic mixed effect model but considers only a random intercept. A simulation study reveals that our proposed estimator remains consistent even when the independence and normality assumptions are violated. This contrasts favourably with the traditional maximum likelihood estimator, which is likely to be inconsistent when there is dependence between the covariates and random effects. Application of this work to a study of Huntington's disease reveals that disease diagnosis can be enhanced using assessments of cognitive performance. The Canadian Journal of Statistics 47: 140–156; 2019 © 2019 Statistical Society of Canada

20.
Rank tests are known to be robust to outliers and to violations of distributional assumptions. Two major issues besetting microarray data are violation of the normality assumption and contamination by outliers. In this article, we formulate normal theory simultaneous tests and their aligned rank transformation (ART) analog for detecting differentially expressed genes. These tests are based on the least-squares estimates of the effects when the data follow a linear model. Application of the two methods is then demonstrated on a real data set. To evaluate the performance of the aligned rank transform method against the corresponding normal theory method, data were simulated according to the characteristics of a real gene expression data set. These simulated data were then used to compare the two methods with respect to their sensitivity to the distributional assumption and to outliers, in terms of controlling the family-wise Type I error rate, power, and false discovery rate. It is demonstrated that the ART generally possesses the robustness-of-validity property even for microarray data with a small number of replications. Although these methods can be applied to more general designs, in this article the simulation study is carried out for a dye-swap design since this design is broadly used in cDNA microarray experiments.
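The aligned rank transform referred to here works by removing the estimated nuisance effects from the data (aligning), ranking the aligned values, and then applying the usual normal theory test to the ranks. A hypothetical Python sketch for testing an interaction in a simple two-factor layout follows; the paper applies the idea gene by gene in a dye-swap design, so the factor structure, effect sizes, and error distribution here are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import rankdata

rng = np.random.default_rng(9)

# Hypothetical two-factor data with heavy-tailed errors.
a = np.repeat([0, 1], 40)
b = np.tile(np.repeat([0, 1], 20), 2)
y = 0.4 * a + 0.3 * b + rng.standard_t(df=3, size=80)
df = pd.DataFrame({"A": a, "B": b, "y": y})

# Align for the A:B interaction: strip off the estimated main effects so the
# aligned response retains only the interaction and the residual (plus the grand mean).
main_fit = smf.ols("y ~ C(A) + C(B)", data=df).fit()
df["aligned"] = df["y"] - main_fit.fittedvalues + df["y"].mean()

# Rank the aligned values and run the normal-theory ANOVA on the ranks.
df["ranked"] = rankdata(df["aligned"])
art_fit = smf.ols("ranked ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(art_fit, typ=2))
```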
