Similar Articles
20 similar articles found (search time: 31 ms)
1.
Establishing that there is no compelling evidence that some population is not normally distributed is fundamental to many statistical inferences, and numerous approaches to testing the null hypothesis of normality have been proposed. Fundamentally, the power of a test depends on which specific deviation from normality may be present in a distribution. Knowledge of the potential nature of deviation from normality should guide the researcher's selection of a test for non-normality. In most settings, little is known aside from the data available for analysis, so that selection of a test based on general applicability is typically necessary. This research proposes and reports the power of two new tests of normality. One of the new tests is a version of the R-test based on the L-moments, namely the L-skewness and L-kurtosis; the other test is based on normalizing transformations of L-skewness and L-kurtosis. Both tests have high power relative to alternatives. The test based on normalized transformations, in particular, shows consistently high power and outperforms other normality tests against a variety of distributions.
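The sample L-skewness and L-kurtosis the abstract refers to can be computed from Hosking's probability-weighted-moment estimators. A minimal sketch (the function name and structure are illustrative, not the paper's actual test statistics):

```python
def sample_l_moments(data):
    """Return (l1, l2, tau3, tau4): first two sample L-moments,
    L-skewness and L-kurtosis, via Hosking's b_r estimators."""
    x = sorted(data)
    n = len(x)
    # b_r = n^-1 * sum_{i=r+1}^{n} [(i-1)...(i-r)] / [(n-1)...(n-r)] * x_(i)
    b = [0.0] * 4
    for i, xi in enumerate(x, start=1):
        for r in range(4):
            w = 1.0
            for k in range(1, r + 1):
                w *= (i - k) / (n - k)
            b[r] += w * xi / n
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2
```

For exactly symmetric data the sample L-skewness is zero, which makes the estimator easy to sanity-check.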

2.
Tim Fischer & Udo Kamps. Statistics, 2013, 47(1): 142–158
There are several well-known mappings which transform the first r common order statistics in a sample of size n from a standard uniform distribution to a full vector of dimension r of order statistics in a sample of size r from a uniform distribution. Continuing the results reported in a previous paper by the authors, it is shown that transformations of these types do not, in general, lead to order statistics from an i.i.d. sample of random variables when applied to order statistics from non-uniform distributions. By accepting the loss of one dimension, a structure-preserving transformation exists for power function distributions.

3.
Latent class models (LCMs) are used increasingly for addressing a broad variety of problems, including sparse modeling of multivariate and longitudinal data, model-based clustering, and flexible inferences on predictor effects. Typical frequentist LCMs require estimation of a single finite number of classes, which does not increase with the sample size, and have a well-known sensitivity to parametric assumptions on the distributions within a class. Bayesian nonparametric methods have been developed to allow an infinite number of classes in the general population, with the number represented in a sample increasing with sample size. In this article, we propose a new nonparametric Bayes model that allows predictors to flexibly impact the allocation to latent classes, while limiting sensitivity to parametric assumptions by allowing class-specific distributions to be unknown subject to a stochastic ordering constraint. An efficient MCMC algorithm is developed for posterior computation. The methods are validated using simulation studies and applied to the problem of ranking medical procedures in terms of the distribution of patient morbidity.

4.
The authors consider the problem of simultaneous transformation and variable selection for linear regression. They propose a fully Bayesian solution to the problem, which allows averaging over all models considered, including transformations of the response and predictors. The authors use the Box‐Cox family of transformations to transform the response and each predictor. To deal with the change of scale induced by the transformations, the authors propose to focus on new quantities rather than the estimated regression coefficients. These quantities, referred to as generalized regression coefficients, have a similar interpretation to the usual regression coefficients on the original scale of the data, but do not depend on the transformations. This allows probabilistic statements about the size of the effect associated with each variable, on the original scale of the data. In addition to variable and transformation selection, there is also uncertainty involved in the identification of outliers in regression. Thus, the authors also propose a more robust model to account for such outliers based on a t‐distribution with unknown degrees of freedom. Parameter estimation is carried out using an efficient Markov chain Monte Carlo algorithm, which permits moves around the space of all possible models. Using three real data sets and a simulation study, the authors show that there is considerable uncertainty about variable selection, choice of transformation, and outlier identification, and that there is advantage in dealing with all three simultaneously. The Canadian Journal of Statistics 37: 361–380; 2009 © 2009 Statistical Society of Canada
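The Box-Cox family used in this abstract (and in items 9 and 13 below) has a simple closed form, with the log transformation as the limiting case at lambda = 0. A minimal sketch:

```python
import math

def box_cox(y, lam):
    """Box-Cox power transformation of a positive value y.
    lam = 0 is the limiting log case; lam = 1 is (up to a shift) the identity."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam
```

In practice lambda is estimated (e.g. by profile likelihood), which is exactly the source of the transformation-selection uncertainty the paper averages over.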

5.
The emphasis in the literature is on normalizing transformations, despite the greater importance of the homogeneity of variance in analysis. A strategy for a choice of variance-stabilizing transformation is suggested. The relevant component of variation must be identified and, when this is not within-subject variation, a major explanatory variable must also be selected to subdivide the data. A plot of group standard deviation against group mean, or log standard deviation against log mean, may identify a simple power transformation or shifted log transformation. In other cases, within the shifted Box-Cox family of transformations, a contour plot to show the region of minimum heterogeneity defined by an appropriate index is proposed to enable an informed choice of transformation. If used in conjunction with the maximum-likelihood contour plot for the normalizing transformation, then it is possible to assess whether or not there exists a transformation that satisfies both criteria.
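The log-SD-versus-log-mean plot mentioned here connects to the classical rule that if the group SD behaves like mean^b, the power transformation y^(1-b) is approximately variance-stabilizing (b = 1 giving the log). A small sketch of that rule, with an illustrative function name:

```python
import math

def suggest_power(group_means, group_sds):
    """Fit log(sd) = a + b*log(mean) by least squares over groups and
    return the suggested variance-stabilizing power p = 1 - b.
    p == 0 is read as the log transformation."""
    lx = [math.log(m) for m in group_means]
    ly = [math.log(s) for s in group_sds]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    return 1.0 - b
```

When SD is proportional to the mean the fitted slope is 1 and the rule returns 0 (log); constant SD gives slope 0 and power 1 (no transformation).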

6.
The authors discuss a class of likelihood functions involving weak assumptions on data generating mechanisms. These likelihoods may be appropriate when it is difficult to propose models for the data. The properties of these likelihoods are given and it is shown how they can be computed numerically by use of the Blahut-Arimoto algorithm. The authors then show how these likelihoods can give useful inferences using a data set for which no plausible physical model is apparent. The plausibility of the inferences is enhanced by the extensive robustness analysis these likelihoods permit.

7.
This study investigates the Bayesian approach to the analysis of paired responses when the responses are categorical. Using resampling and analytical procedures, inferences for homogeneity and agreement are developed. The posterior analysis is based on the Dirichlet distribution, from which repeated samples can be generated with a random number generator. Resampling and analytical techniques are employed to make Bayesian inferences, and when it is not appropriate to use analytical procedures, resampling techniques are easily implemented. The Bayesian methodology is illustrated with several examples, and the results show that these are exact small-sample procedures that can easily solve inference problems for matched designs.

8.
We consider estimation of a class of power-transformed threshold GARCH models. When the power of the transformation is known, the asymptotic properties of the quasi-maximum likelihood estimator (QMLE) are established under mild conditions. Two sequences of least-squares estimators are also considered in the pure ARCH case, and it is shown that they can be asymptotically more accurate than the QMLE for certain power transformations. In the case where the power of the transformation has to be estimated, the asymptotic properties of the QMLE are proven under the assumption that the noise has a density. The finite-sample properties of the proposed estimators are studied by simulation.

9.
In this paper, we consider the influence of individual observations on inferences about the Box–Cox power transformation parameter from a Bayesian point of view. We compare Bayesian diagnostic measures with the ‘forward’ method of analysis due to Riani and Atkinson. In particular, we look at the effect of omitting observations on the inference by comparing particular choices of transformation using the conditional predictive ordinate and the k_d measure of Pettit and Young. We illustrate the methods using a designed experiment. We show that a group of masked outliers can be detected using these single-deletion diagnostics. Also, we show that Bayesian diagnostic measures are simpler to use for investigating the effect of observations on transformations than the forward search method.

10.
Various modifications of Levene's test of homogeneity of variance are proposed and evaluated, including the use of (i) Satterthwaite's method for correcting degrees of freedom, (ii) data-based power transformations, and (iii) computer simulation. Satterthwaite's correction is shown to be effective in controlling the slightly liberal behaviour of Levene's test in small samples. The use of power transformation turns out to make the test extremely liberal and is not recommended. Modifications which employ computer simulation are exact under normality, and one version, at least, is asymptotically robust to non-normality. They also possess excellent small-sample properties.
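For reference, the unmodified Levene statistic the paper starts from is a one-way ANOVA F computed on absolute deviations from each group's centre. A minimal sketch (mean-centred; the median-centred Brown-Forsythe variant swaps the `center` function):

```python
def levene_W(groups, center=lambda g: sum(g) / len(g)):
    """Classical Levene statistic: one-way ANOVA F on the absolute
    deviations z_ij = |x_ij - center(group_i)|."""
    z = [[abs(x - center(g)) for x in g] for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    zbar_i = [sum(g) / len(g) for g in z]        # per-group means of z
    zbar = sum(sum(g) for g in z) / n            # grand mean of z
    ssb = sum(len(g) * (zi - zbar) ** 2 for g, zi in zip(z, zbar_i))
    ssw = sum((x - zi) ** 2 for g, zi in zip(z, zbar_i) for x in g)
    return ((n - k) / (k - 1)) * ssb / ssw
```

Identical groups give W = 0; groups with very different spreads give a large W, which is then referred to an F distribution (with Satterthwaite-corrected degrees of freedom in the paper's modification (i)).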

11.
A common approach to analysing clinical trials with multiple outcomes is to control the probability for the trial as a whole of making at least one incorrect positive finding under any configuration of true and false null hypotheses. Popular approaches are to use Bonferroni corrections or structured approaches such as, for example, closed-test procedures. As is well known, such strategies, which control the family-wise error rate, typically reduce the type I error for some or all the tests of the various null hypotheses to below the nominal level. In consequence, there is generally a loss of power for individual tests. What is less well appreciated, perhaps, is that depending on approach and circumstances, the test-wise loss of power does not necessarily lead to a family-wise loss of power. In fact, it may be possible to increase the overall power of a trial by carrying out tests on multiple outcomes without increasing the probability of making at least one type I error when all null hypotheses are true. We examine two types of problems to illustrate this. Unstructured testing problems arise typically (but not exclusively) when many outcomes are being measured. We consider the case of more than two hypotheses when a Bonferroni approach is being applied, while for illustration we assume compound symmetry to hold for the correlation of all variables. Using the device of a latent variable it is easy to show that power is not reduced as the number of variables tested increases, provided that the common correlation coefficient is not too high (say, less than 0.75). Afterwards, we consider structured testing problems. Here, multiplicity problems arising from the comparison of more than two treatments, as opposed to more than one measurement, are typical. We conduct a numerical study and conclude again that power is not reduced as the number of tested variables increases.
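The abstract's key point, that Bonferroni's test-wise power loss need not translate into a family-wise power loss, can be illustrated with a deterministic toy calculation. Note the simplification: the sketch below assumes independent one-sided tests with a common standardized effect, not the compound-symmetry latent-variable setting the paper actually analyses.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(p, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection on phi."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def disjunctive_power(delta, k, alpha=0.05):
    """Power to reject at least one of k independent one-sided tests,
    each with standardized effect delta, at Bonferroni level alpha/k."""
    z_crit = z_quantile(1.0 - alpha / k)
    per_test = phi(delta - z_crit)        # power of each individual test
    return 1.0 - (1.0 - per_test) ** k    # at-least-one-rejection power
```

Per-test power falls as k grows (the critical value rises), yet the disjunctive power of the family can still increase, which is the phenomenon the abstract highlights.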

12.
For estimating area‐specific parameters (quantities) in a finite population, a mixed‐model prediction approach is attractive. However, this approach strongly depends on the normality assumption of the response values, although we often encounter a non‐normal case in practice. In such a case, transforming observations to make them suitable for normality assumption is a useful tool, but the problem of selecting a suitable transformation still remains open. To overcome the difficulty, we here propose a new empirical best predicting method by using a parametric family of transformations to estimate a suitable transformation based on the data. We suggest a simple estimating method for transformation parameters based on the profile likelihood function, which achieves consistency under some conditions on transformation functions. For measuring the variability of point prediction, we construct an empirical Bayes confidence interval of the population parameter of interest. Through simulation studies, we investigate the numerical performance of the proposed methods. Finally, we apply the proposed method to synthetic income data in Spanish provinces in which the resulting estimates indicate that the commonly used log transformation would not be appropriate.

13.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Since the asymptotic normality of the Kaplan–Meier estimator is a well-known result, sample size calculation methods have not been studied in depth. An existing sample size calculation method is founded on the asymptotic normality of the Kaplan–Meier estimator using the log transformation. However, the small-sample properties of the log-transformed estimator are quite poor at small sample sizes (which are typical in single-arm trials), and the existing method uses an inappropriate standard normal approximation to calculate sample sizes. These issues can seriously affect the accuracy of results. In this paper, we propose alternative methods to determine sample sizes based on a valid standard normal approximation with several transformations that may give an accurate normal approximation even with small sample sizes. In numerical evaluations via simulations, some of the proposed methods provided more accurate results, and the empirical power of the proposed method with the arcsine square-root transformation tended to be closer to a prescribed power than the other transformations. These results were supported when the methods were applied to data from three clinical trials.
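To show what the arcsine square-root transformation does here, a delta-method confidence interval for a survival probability under that transformation can be sketched as below. This is only the CI construction, not the paper's sample size calculation; the function name and inputs (point estimate plus, e.g., a Greenwood variance estimate) are illustrative.

```python
import math

def arcsine_ci(s_hat, var_s, z=1.959964):
    """Approximate CI for a survival probability via the arcsine
    square-root transformation and the delta method.
    g(s) = asin(sqrt(s)), so g'(s) = 1 / (2*sqrt(s*(1-s)))."""
    g = math.asin(math.sqrt(s_hat))
    se_g = math.sqrt(var_s) / (2.0 * math.sqrt(s_hat * (1.0 - s_hat)))
    lo = math.sin(max(0.0, g - z * se_g)) ** 2
    hi = math.sin(min(math.pi / 2, g + z * se_g)) ** 2
    return lo, hi
```

Because the back-transform sin(·)^2 maps into [0, 1], the interval respects the probability scale, unlike a plain normal interval on S itself.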

14.
This paper discusses inferences for the parameters of a transformation model in the presence of a scalar nuisance parameter that describes the shape of the error distribution. The development is from the point of view of conditional inference and thus is an attempt to extend the classical fiducial (or structural inference) argument. For known shape parameter it is straightforward to derive a fiducial distribution of the transformation parameters from which confidence points can be obtained. For unknown shape parameter, the paper discusses a certain average of these fiducial distributions. The weights used in this averaging process are naturally induced by the action of the underlying group of transformations and correspond to a noninformative prior for the nuisance parameter. This results in a confidence distribution for the transformation parameters which in some cases has good frequentist properties. The method is illustrated by some examples.

15.
Quadratic forms capture multivariate information in a single number, making them useful, for example, in hypothesis testing. When a quadratic form is large and hence interesting, it might be informative to partition the quadratic form into contributions of individual variables. In this paper it is argued that meaningful partitions can be formed, though the precise partition that is determined will depend on the criterion used to select it. An intuitively reasonable criterion is proposed and the partition to which it leads is determined. The partition is based on a transformation that maximises the sum of the correlations between individual variables and the variables to which they transform under a constraint. Properties of the partition, including optimality properties, are examined. The contributions of individual variables to a quadratic form are less clear‐cut when variables are collinear, and forming new variables through rotation can lead to greater transparency. The transformation is adapted so that it has an invariance property under such rotation, whereby the assessed contributions are unchanged for variables that the rotation does not affect directly. Application of the partition to Hotelling's one‐ and two‐sample test statistics, Mahalanobis distance and discriminant analysis is described and illustrated through examples. It is shown that bootstrap confidence intervals for the contributions of individual variables to a partition are readily obtained.

16.
A note on the correlation structure of transformed Gaussian random fields
Transformed Gaussian random fields can be used to model continuous time series and spatial data when the Gaussian assumption is not appropriate. The main features of these random fields are specified in a transformed scale, while for modelling and parameter interpretation it is useful to establish connections between these features and those of the random field in the original scale. This paper provides evidence that for many ‘normalizing’ transformations the correlation function of a transformed Gaussian random field is not very dependent on the transformation that is used. Hence many commonly used transformations of correlated data have little effect on the original correlation structure. The property is shown to hold for some kinds of transformed Gaussian random fields, and a statistical explanation based on the concept of parameter orthogonality is provided. The property is also illustrated using two spatial datasets and several ‘normalizing’ transformations. Some consequences of this property for modelling and inference are also discussed.

17.
The vec of a matrix X stacks the columns of X one under another in a single column; the vech of a square matrix X does the same thing but starts each column at its diagonal element. The Jacobian of a one-to-one transformation X → Y is then |∂(vec X)/∂(vec Y)| when X and Y each have functionally independent elements; it is |∂(vech X)/∂(vech Y)| when X and Y are symmetric; and there is a general form for when X and Y are other patterned matrices. Kronecker product properties of vec(ABC) permit easy evaluation of this determinant in many cases. The vec and vech operators are also very convenient in developing results in multivariate statistics.
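The vec/vech operators and the Kronecker identity vec(ABC) = (Cᵀ ⊗ A) vec(B) that the abstract relies on can be checked directly. A small pure-Python sketch on 2×2 matrices (matrices as lists of rows, column-major vec):

```python
def vec(X):
    """Stack the columns of X one under another."""
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

def vech(X):
    """Stack the columns of square X, starting each at its diagonal element."""
    n = len(X)
    return [X[i][j] for j in range(n) for i in range(j, n)]

def transpose(X):
    return [list(col) for col in zip(*X)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def kron(A, B):
    """Kronecker product A (ra x ca) with B (rb x cb)."""
    ra, ca, rb, cb = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]
```

With these definitions, vec(matmul(matmul(A, B), C)) equals matvec(kron(transpose(C), A), vec(B)) exactly for integer matrices, which is the identity behind the easy Jacobian evaluation.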

18.
Communications in Statistics – Theory and Methods, 2012, 41(16–17): 3060–3067
In this article we propose a new transformation of random variables (RVs) which characterizes the normal distribution. It allows us to transform n i.i.d. normal RVs with unknown mean and variance into n − 2 new i.i.d. normal variables with zero mean and the same unknown variance. This belongs to the class of transformations designed to reduce the number of unknown parameters or remove them altogether.
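The classical member of this class is the Helmert transformation, which removes the unknown mean: n i.i.d. N(μ, σ²) values map to n − 1 i.i.d. N(0, σ²) contrasts. A sketch of that textbook device for context (this is not the article's new n − 2 transformation):

```python
import math

def helmert(x):
    """Helmert contrasts: y_k = (x_1 + ... + x_k - k*x_{k+1}) / sqrt(k*(k+1)).
    For x_i i.i.d. N(mu, s^2), the y_k are i.i.d. N(0, s^2)."""
    out = []
    for k in range(1, len(x)):
        out.append((sum(x[:k]) - k * x[k]) / math.sqrt(k * (k + 1)))
    return out
```

Each contrast has mean (kμ − kμ)/√(k(k+1)) = 0 and variance (kσ² + k²σ²)/(k(k+1)) = σ², so the unknown mean is removed while the unknown variance is preserved, exactly the kind of parameter reduction the abstract describes.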

Some historical remarks concerning methods for removing parameters in the normal distribution are given and two possible applications of the new transformation are described.

19.
The bootstrap is a powerful non-parametric statistical technique for making probability-based inferences about a population parameter. Through a Monte-Carlo resampling simulation, bootstrapping empirically generates a statistic's entire distribution. From this simulated distribution, inferences can be made about a population parameter. Assumptions about normality are not required. In general, despite its power, bootstrapping has been used relatively infrequently in social science research, and this is particularly true for business research. This under-utilization is likely due to a combination of a general lack of understanding of the bootstrap technique and the difficulty with which it has traditionally been implemented. Researchers in the various fields of business should be familiar with this powerful statistical technique. The purpose of this paper is to explain how this technique works using Lotus 1-2-3, a software package with which business people are very familiar.
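The resampling scheme the abstract describes (the paper implements it in Lotus 1-2-3) is a few lines in a modern language. A minimal percentile-bootstrap sketch for a confidence interval on the mean:

```python
import random

def bootstrap_ci(data, stat=lambda s: sum(s) / len(s),
                 n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI: resample the data with replacement n_boot
    times, compute the statistic each time, and take empirical quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

No normality assumption enters anywhere: the interval comes entirely from the empirical distribution of the resampled statistic.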

20.
Summary. Heavy-tailed distributions can be generated by applying specific non-linear transformations to a Gaussian random variable. Within this work we introduce power kurtosis transformations, which are essentially determined by their generator function. Examples are the H-transformation of Tukey (1960), the K-transformation of MacGillivray and Cannon (1997) and the J-transformation of Fischer and Klein (2004). Furthermore, we derive a general condition on the generator function which guarantees that the corresponding transformation is actually tail-increasing. In this case the exponent of the power kurtosis transformation can be interpreted as a kurtosis parameter. We also prove that the transformed distributions can be ordered with respect to the partial ordering of van Zwet (1964) for symmetric distributions.
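The best-known example in this family, Tukey's h-transformation cited in the abstract, is a one-liner: applied to a standard normal Z it yields a distribution with heavier tails for h > 0. A minimal sketch:

```python
import math

def tukey_h(z, h):
    """Tukey's h-transformation T(z) = z * exp(h * z^2 / 2).
    For h > 0 it is tail-increasing: |T(z)| >= |z|, growing fast in the tails."""
    return z * math.exp(h * z * z / 2.0)
```

The transformation is odd and strictly increasing for small h, so it reshapes the tails without introducing skewness, which is why h acts as a pure kurtosis parameter.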

