Similar Literature
20 similar documents found.
1.
The authors propose a simple but general method of inference for a parametric function of the Box-Cox-type transformation model. Their approach is built upon the classical normal theory but takes parameter estimation into account. It quickly leads to test statistics and confidence intervals for a linear combination of scaled or unscaled regression coefficients, as well as for the survivor function and marginal effects on the median or other quantile functions of an original response. The authors show through simulations that the finite-sample performance of their method is often superior to the delta method, and that their approach is robust to mild departures from normality of error distributions. They illustrate their approach with a numerical example.
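Since the delta method is the comparison baseline here, a minimal sketch of it may help fix ideas: it propagates the estimated covariance of the model parameters through a smooth function such as a back-transformed median. All numbers below (the parameter estimates, their covariance, and the function g) are hypothetical illustrations, not the authors' model.

```python
import numpy as np

# Hypothetical estimates: Box-Cox exponent lambda and a linear predictor eta,
# with an assumed estimated covariance matrix (illustrative values only).
theta_hat = np.array([0.5, 2.0])
cov_hat = np.array([[0.010, 0.002],
                    [0.002, 0.040]])

def g(theta):
    """Median of the original response implied by a Box-Cox model:
    (y**lam - 1)/lam = eta  =>  y = (lam*eta + 1)**(1/lam)."""
    lam, eta = theta
    return (lam * eta + 1.0) ** (1.0 / lam)

# Numerical gradient of g at theta_hat (central differences).
eps = 1e-6
grad = np.array([
    (g(theta_hat + eps * np.eye(2)[j]) - g(theta_hat - eps * np.eye(2)[j])) / (2 * eps)
    for j in range(2)
])

# Delta-method standard error and 95% normal-theory confidence interval for g(theta).
se = np.sqrt(grad @ cov_hat @ grad)
est = g(theta_hat)
print(f"estimate {est:.3f}, 95% CI ({est - 1.96 * se:.3f}, {est + 1.96 * se:.3f})")
```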

2.
In a recent volume of this journal, Holden [Testing the normality assumption in the Tobit Model, J. Appl. Stat. 31 (2004) pp. 521–532] presents Monte Carlo evidence comparing several tests for departures from normality in the Tobit Model. This study adds to the work of Holden by considering another test, and several information criteria, for detecting departures from normality in the Tobit Model. The test given here is a modified likelihood ratio statistic based on a partially adaptive estimator of the Censored Regression Model using the approach of Caudill [A partially adaptive estimator for the Censored Regression Model based on a mixture of normal distributions, Working Paper, Department of Economics, Auburn University, 2007]. The information criteria examined include Akaike's Information Criterion (AIC), the Consistent AIC (CAIC), the Bayesian information criterion (BIC), and Akaike's BIC (ABIC). In terms of fewest ‘rejections’ of a true null, the best performance is exhibited by the CAIC and the BIC, although, like some of the statistics examined by Holden, there are computational difficulties with each.
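For reference, the penalized-likelihood criteria being compared differ only in how heavily they penalize the number of parameters. A small sketch with hypothetical log-likelihoods follows (ABIC is omitted because several variants of it exist); the model being compared is a stand-in, not the paper's Tobit fits.

```python
import math

def information_criteria(loglik: float, k: int, n: int) -> dict:
    """Standard penalized-likelihood criteria; smaller values indicate a preferred model."""
    return {
        "AIC":  -2.0 * loglik + 2.0 * k,
        "BIC":  -2.0 * loglik + k * math.log(n),
        "CAIC": -2.0 * loglik + k * (math.log(n) + 1.0),
    }

# Hypothetical comparison: a normal-error fit vs. a richer mixture fit (illustrative numbers only).
print(information_criteria(loglik=-412.7, k=4, n=200))
print(information_criteria(loglik=-405.3, k=7, n=200))
```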

3.
In epidemiological surveillance it is important that any unusual increase in reported cases be detected as rapidly as possible. Reliable forecasting based on a suitable time series model for an epidemiological indicator is necessary for estimating the expected non-epidemic level of the indicator and for constructing an alert threshold. Time series analyses of acute diseases often use Gaussian autoregressive integrated moving average models. However, these approaches can be adversely affected by departures from the true underlying distribution. The objective of this paper is to introduce a bootstrap procedure for obtaining prediction intervals in linear models in order to avoid the normality assumption. We present a Monte Carlo study comparing the finite-sample properties of bootstrap prediction intervals with those of alternative methods. Finally, we illustrate the performance of the proposed method with a meningococcal disease incidence series.
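A minimal residual-bootstrap sketch of a prediction interval in a linear model, assuming i.i.d. errors; the data are simulated here for illustration (not the meningococcal series), and the resampling scheme is a generic one rather than necessarily the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear-model data (a stand-in for a non-epidemic indicator series).
n = 100
X = np.column_stack([np.ones(n), np.arange(n)])
beta_true = np.array([5.0, 0.1])
y = X @ beta_true + rng.standard_t(df=4, size=n)   # deliberately non-normal errors

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

x_new = np.array([1.0, n])                         # one-step-ahead design point
B = 2000
preds = np.empty(B)
for b in range(B):
    # Resample residuals, refit, and add a resampled future error.
    y_b = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    beta_b, *_ = np.linalg.lstsq(X, y_b, rcond=None)
    preds[b] = x_new @ beta_b + rng.choice(resid)

lower, upper = np.percentile(preds, [2.5, 97.5])
print(f"point forecast {x_new @ beta_hat:.2f}, 95% bootstrap PI ({lower:.2f}, {upper:.2f})")
```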

4.
The effects of non-normality on type-I and type-II errors in a one-way random model are investigated for moderate departures from normality. It is found that the probabilities of both errors are more sensitive to the kurtosis of the between-group effects than to that of the within-group effects.

5.
Sample kurtosis is a member of the large class of absolute moment tests of normality. We compare kurtosis to other absolute moment tests to determine which are the most powerful at detecting long-tailed symmetric departures from normality for large samples. The large-sample power of the tests is calculated using Geary's (1947) approximations of the moments of the test statistics. Using the system of Gram-Charlier symmetric distributions as alternatives, the most power is obtained using an absolute moment of order between 2.5 and 3.5.
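A sketch of an absolute moment statistic of general order c, of which sample kurtosis is the c = 4 member; the exact standardization used by Geary may differ slightly from the common convention assumed below.

```python
import numpy as np

def absolute_moment_statistic(x: np.ndarray, c: float) -> float:
    """a(c) = mean(|x - xbar|**c) / s**c, with s the (biased) sample standard deviation.
    c = 4 gives the usual sample kurtosis; Geary's statistic corresponds to c = 1."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    s = np.sqrt(np.mean(d ** 2))
    return float(np.mean(np.abs(d) ** c) / s ** c)

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=5000)          # a long-tailed symmetric alternative
for c in (1.0, 2.5, 3.0, 3.5, 4.0):
    print(c, round(absolute_moment_statistic(x, c), 3))
# Under normality, a(4) is near 3; inflated values flag heavy tails.
```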

6.
Linear mixed models (LMM) are frequently used to analyze repeated measures data because they are flexible in modelling the within-subject correlation often present in this type of data. The most popular LMM for continuous responses assumes that both the random effects and the within-subject errors are normally distributed, which can be an unrealistic assumption, obscuring important features of the variation present within and among the units (or groups). This work presents skew-normal linear mixed models (SNLMM) that relax the normality assumption by using a multivariate skew-normal distribution, which includes the normal distribution as a special case and provides robust estimation in mixed models. The MCMC scheme is derived, and the results of a simulation study are provided demonstrating that standard information criteria may be used to detect departures from normality. The procedures are illustrated using a real data set from a cholesterol study.
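To illustrate the key property exploited here, that the skew-normal family contains the normal as the shape-parameter-zero special case, the small sketch below uses scipy.stats.skewnorm; it is not the paper's MCMC scheme, and the shape values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Shape a = 0 recovers the normal distribution; a != 0 introduces skewness.
normal_like = stats.skewnorm.rvs(a=0.0, loc=0.0, scale=1.0, size=10_000, random_state=rng)
skewed      = stats.skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=10_000, random_state=rng)

print("sample skewness, a=0:", round(stats.skew(normal_like), 3))
print("sample skewness, a=4:", round(stats.skew(skewed), 3))
```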

7.
Tests for normality can be divided into two groups: those based upon a function of the empirical distribution function and those based upon a function of the original observations. The latter group of statistics tests spherical symmetry and not necessarily normality. If the distribution is completely specified, then the first group can be used to test for ‘spherical’ normality. However, if the distribution is incompletely specified and F((x_i − x̄)/s) is used, these test statistics also test sphericity rather than normality. A Monte Carlo study was conducted for the completely specified case to investigate the sensitivity of the distance tests to departures from normality when the alternative distributions are non-normal spherically symmetric laws. A “new” test statistic is proposed for testing a completely specified normal distribution.

8.
An efficient method for incorporating incomplete prior information in regression analysis was developed by Theil [1963]. In this paper we take up the estimator of the coefficients given by this procedure and study its robustness to departures from normality of the prior estimators of the coefficients. The use of incomplete or biased prior information in regression analysis is also considered, and a new estimator for the regression coefficients is suggested.
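For context, the classical Theil-Goldberger mixed estimator combines the sample information y = Xb + e with stochastic prior restrictions r = Rb + v by generalized least squares. The sketch below implements that textbook form with made-up data and prior values; it is not the modified estimator proposed in the paper.

```python
import numpy as np

def theil_mixed_estimator(X, y, R, r, sigma2, Omega):
    """Theil-Goldberger mixed estimation: combine sample data y = X b + e (Var e = sigma2*I)
    with stochastic prior restrictions r = R b + v (Var v = Omega) by generalized least squares."""
    Omega_inv = np.linalg.inv(Omega)
    A = X.T @ X / sigma2 + R.T @ Omega_inv @ R
    b = X.T @ y / sigma2 + R.T @ Omega_inv @ r
    return np.linalg.solve(A, b)

rng = np.random.default_rng(3)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
y = X @ beta + rng.normal(scale=1.0, size=n)

# Prior belief: the slope is about 1.8, with prior variance 0.25 (illustrative values).
R = np.array([[0.0, 1.0]])
r = np.array([1.8])
Omega = np.array([[0.25]])

print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("mixed:", theil_mixed_estimator(X, y, R, r, sigma2=1.0, Omega=Omega))
```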

9.
We study objective Bayesian inference for linear regression models with residual errors distributed according to the class of two-piece scale mixtures of normal distributions. These models allow for capturing departures from the usual assumption of normality of the errors in terms of heavy tails, asymmetry, and certain types of heteroscedasticity. We propose a general non-informative, scale-invariant prior structure and provide sufficient conditions for the propriety of the posterior distribution of the model parameters, which cover cases when the response variables are censored. These results allow us to apply the proposed models in the context of survival analysis. This paper represents an extension to the Bayesian framework of the models proposed in [16]. We present a simulation study that shows good frequentist properties of the posterior credible intervals as well as of the point estimators associated with the proposed priors. We illustrate the performance of these models with real data in the context of survival analysis of cancer patients.

10.
In this note we use the class of exponential power distributions to assess the robustness to non-normality of the test for outliers based on the maximum absolute studentized residual. We find that the significance levels can be quite markedly affected by even moderate departures from normality of the error distribution in a regression model when the sample size is moderately large.
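A sketch of the statistic in question, the maximum absolute (internally) studentized residual from an OLS fit, computed on simulated data with normal and with heavier-tailed errors; the error laws below are illustrative and not the exponential power family used in the note.

```python
import numpy as np

def max_abs_studentized_residual(X: np.ndarray, y: np.ndarray) -> float:
    """Maximum absolute internally studentized residual from an OLS fit."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix
    e = y - H @ y
    s2 = e @ e / (n - p)
    r = e / np.sqrt(s2 * (1.0 - np.diag(H)))
    return float(np.max(np.abs(r)))

rng = np.random.default_rng(4)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_normal = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
y_heavy  = X @ np.array([1.0, 0.5]) + rng.laplace(size=n)   # a heavier-tailed error law

print("normal errors: ", round(max_abs_studentized_residual(X, y_normal), 2))
print("Laplace errors:", round(max_abs_studentized_residual(X, y_heavy), 2))
```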

11.
Leptokurtosis and skewness characterize the distributions of the returns for many financial instruments traded in security markets. These departures from normality can adversely affect the efficiency of least squares estimates of the β's in the single index or market model. The proposed new partially adaptive estimation techniques accommodate skewed and fat-tailed distributions. The empirical investigation, which is the first application of this procedure in regression models, reveals that both skewness and kurtosis can affect β estimates.

12.
A new test for autocorrelation in a general regression model under departures from the assumption of normality is derived by applying a beta distribution and a bootstrap approximation. Critical values of the test can be computed for each given design matrix, irrespective of the form of the underlying error distribution. Monte Carlo simulations are conducted in order to illustrate the performance of the test. Among other findings, the suggested test is more robust and far more powerful than existing nonparametric tests.

13.
The family of symmetric generalized exponential power (GEP) densities offers a wide range of tail behaviors, which may be exponential, polynomial, and/or logarithmic. In this article, a test of normality based on Rao's score statistic and this family of GEP alternatives is proposed. This test is tailored to detect departures from normality in the tails of the distribution. The main interest of this approach is that it provides a test with a large family of symmetric alternatives having non-normal tails. In addition, the test statistic consists of a combination of three quantities that can be interpreted as new measures of tail thickness. In a Monte Carlo simulation study, the proposed test is shown to perform well in terms of power when compared to its competitors.

14.
Real world data often fail to meet the underlying assumption of population normality. The Rank Transformation (RT) procedure has been recommended as an alternative to the parametric factorial analysis of covariance (ANCOVA). The purpose of this study was to compare the Type I error and power properties of the RT ANCOVA to the parametric procedure in the context of a completely randomized balanced 3 × 4 factorial layout with one covariate. This study was concerned with tests of homogeneity of regression coefficients and interaction under conditional (non)normality. Both procedures displayed erratic Type I error rates for the test of homogeneity of regression coefficients under conditional nonnormality. With all parametric assumptions valid, the simulation results demonstrated that the RT ANCOVA failed as a test for either homogeneity of regression coefficients or interaction due to severe Type I error inflation. The error inflation was most severe when departures from conditional normality were extreme. Also associated with the RT procedure was a loss of power. It is recommended that the RT procedure not be used as an alternative to factorial ANCOVA despite its encouragement from SAS, IMSL, and other respected sources.
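The basic rank transformation recipe, ranking the response and then running the parametric factorial ANCOVA on the ranks, can be sketched with statsmodels as below; the simulated 3 × 4 layout mirrors the study's design, but the data-generating values are arbitrary and this is not the authors' simulation code.

```python
import numpy as np
import pandas as pd
from scipy.stats import rankdata
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_per_cell = 10
a = np.repeat(np.arange(3), 4 * n_per_cell)           # 3-level factor
b = np.tile(np.repeat(np.arange(4), n_per_cell), 3)   # 4-level factor
x = rng.normal(size=a.size)                           # covariate
y = 0.5 * x + rng.normal(size=a.size)                 # no true factor effects

df = pd.DataFrame({"a": a, "b": b, "x": x, "y": y, "ry": rankdata(y)})

# Parametric factorial ANCOVA on raw responses vs. on ranked responses.
fit_raw  = smf.ols("y ~ C(a) * C(b) + x", data=df).fit()
fit_rank = smf.ols("ry ~ C(a) * C(b) + x", data=df).fit()
print(sm.stats.anova_lm(fit_raw, typ=2))
print(sm.stats.anova_lm(fit_rank, typ=2))
```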

15.
In this paper we assess the sensitivity of the multivariate extreme deviate test for a single multivariate outlier to non-normality in the form of heavy tails. We find that the empirical significance levels can be markedly affected by even modest departures from multivariate normality. The effects are particularly severe when the sample size is large relative to the dimension. Finally, by way of example we demonstrate that certain graphical techniques may prove useful in identifying the source of rejection for the multivariate extreme deviate test.
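A sketch of the extreme deviate statistic, here taken as the maximum squared Mahalanobis distance from the sample mean (an assumption about the precise form of the test), contrasted under multivariate normal and heavy-tailed multivariate t data.

```python
import numpy as np

def max_mahalanobis(X: np.ndarray) -> float:
    """Maximum squared Mahalanobis distance of the rows of X from the sample mean."""
    d = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return float(np.max(np.einsum("ij,jk,ik->i", d, S_inv, d)))

rng = np.random.default_rng(5)
n, p = 200, 3

normal_sample = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

# Multivariate t with 5 df: heavy tails inflate the extreme deviate.
chi = rng.chisquare(df=5, size=(n, 1))
t_sample = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n) / np.sqrt(chi / 5.0)

print("max D^2, normal:", round(max_mahalanobis(normal_sample), 2))
print("max D^2, t(5):  ", round(max_mahalanobis(t_sample), 2))
```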

16.
We develop two tests sensitive to various departures from the composite goodness-of-fit hypothesis of normality. The tests are based on the sums of squares of some components naturally arising in a decomposition of the Shapiro–Wilk-type statistic. Each component itself has diagnostic properties. The numbers of squared components in the sums are determined via some novel selection rules based on the data. The new solutions prove to be effective tools in detecting a broad spectrum of sources of non-Gaussianity. We also discuss two variants of the new tests adjusted to verification of the simple goodness-of-fit hypothesis of normality. These variants also compare well to popular competitors.

17.
Multivariate statistical analysis procedures often require data to be multivariate normally distributed. Many tests have been developed to verify whether a sample could indeed have come from a normally distributed population. These tests do not all share the same sensitivity for detecting departures from normality, and thus the choice of test is of central importance. This study investigates, through simulated data, the power of those tests for multivariate normality implemented in the statistical software R and pits them against the variant of testing each marginal distribution for normality. The results of testing two-dimensional data at a significance level of α = 5% showed that almost one-third of the tests implemented in R do not keep the type I error rate below this level. Other tests outperformed the naive variant in terms of power even when the marginals were not normally distributed. Even though no test was consistently better than all alternatives under every alternative distribution, the energy-statistic test always showed relatively good power across all tested sample sizes.
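The "naive" comparator described above, testing each marginal with a univariate normality test and combining with a Bonferroni correction, is easy to sketch; the choice of Shapiro-Wilk as the marginal test is an assumption made here for illustration, and the energy test itself (available in R's energy package) is not reimplemented.

```python
import numpy as np
from scipy import stats

def marginal_normality_test(X: np.ndarray, alpha: float = 0.05) -> bool:
    """Naive variant: reject multivariate normality if any marginal Shapiro-Wilk
    test rejects at the Bonferroni-adjusted level alpha / p."""
    n, p = X.shape
    pvals = [stats.shapiro(X[:, j]).pvalue for j in range(p)]
    return min(pvals) < alpha / p

rng = np.random.default_rng(6)
X_normal = rng.multivariate_normal(np.zeros(2), [[1.0, 0.5], [0.5, 1.0]], size=100)
X_skewed = np.column_stack([rng.exponential(size=100), rng.normal(size=100)])

print("reject for normal data?", marginal_normality_test(X_normal))
print("reject for skewed data?", marginal_normality_test(X_skewed))
```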

18.
Exact confidence interval estimation for accelerated life regression models with censored smallest extreme value (or Weibull) data is often impractical. This paper evaluates the accuracy of approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimator, the asymptotic χ2 distribution of the likelihood ratio statistic, mean and variance correction to the likelihood ratio statistic, and the so-called Bartlett correction to the likelihood ratio statistic. The Monte Carlo evaluations under various degrees of time censoring show that uncorrected likelihood ratio intervals are very accurate in situations with heavy censoring. The benefits of mean and variance correction to the likelihood ratio statistic are only realized with light or no censoring. Bartlett correction tends to result in conservative intervals. Intervals based on the asymptotic normality of maximum likelihood estimators are anticonservative and should be used with much caution.

19.
In this paper we derive the predictive density function of a future observation under the assumption of an Edgeworth-type non-normal prior distribution for the unknown mean of a normal population. Fixed-size single sample and sequential sampling inspection plans, in a decisive prediction framework, are examined for their sensitivity to departures from normality of the prior distribution. Numerical illustrations indicate that the decision to market the remaining items of a given lot under a fixed-size plan may be sensitive to the presence of skewness or kurtosis in the prior distribution. However, Bayes' decision based on the sequential plan may not change, though expected gains may change with variation in the non-normality of the prior distribution.

20.
Problems of the analysis of data with incomplete observations are all too familiar in statistics. They are doubly difficult if we are also uncertain about the choice of model. We propose a general formulation for the discussion of such problems and develop approximations to the resulting bias of maximum likelihood estimates on the assumption that model departures are small. Loss of efficiency in parameter estimation due to incompleteness in the data has a dual interpretation: the increase in variance when an assumed model is correct; the bias in estimation when the model is incorrect. Examples include non-ignorable missing data, hidden confounders in observational studies and publication bias in meta-analysis. Doubling variances before calculating confidence intervals or test statistics is suggested as a crude way of addressing the possibility of undetectably small departures from the model. The problem of assessing the risk of lung cancer from passive smoking is used as a motivating example.
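The crude adjustment suggested at the end amounts to multiplying standard errors by the square root of two; a tiny sketch with hypothetical numbers (the estimate and standard error below are illustrative, not from the passive-smoking example).

```python
import math

# Hypothetical estimate and model-based standard error.
estimate, se = 1.40, 0.25
z = 1.96

naive_ci    = (estimate - z * se, estimate + z * se)
# Doubling the variance multiplies the standard error by sqrt(2).
inflated_ci = (estimate - z * se * math.sqrt(2), estimate + z * se * math.sqrt(2))

print("model-based 95% CI: ", tuple(round(v, 3) for v in naive_ci))
print("variance-doubled CI:", tuple(round(v, 3) for v in inflated_ci))
```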
