Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
In this paper we study the robustness of the directional mean (a.k.a. circular mean) for different families of circular distributions. We show that the directional mean is robust in the sense of finite standardized gross error sensitivity (SB-robust) for the following families: (1) mixture of two circular normal distributions, (2) mixture of wrapped normal and circular normal distributions and (3) mixture of two wrapped normal distributions. We also show that the directional mean is not SB-robust for the family of all circular normal distributions with varying concentration parameter. We define the circular trimmed mean and prove that it is SB-robust for this family. In general, the property of SB-robustness of an estimator at a family of probability distributions depends on the choice of the dispersion measure. We introduce the concept of equivalent dispersion measures and prove that if an estimator is SB-robust for one dispersion measure then it is SB-robust for all equivalent dispersion measures. Three different dispersion measures for circular distributions are considered and their equivalence is studied.
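The directional mean discussed above is the standard circular statistic: the angle of the resultant of the unit vectors (cos θᵢ, sin θᵢ). A minimal stdlib sketch (the function name is ours, not the paper's):

```python
import math

def directional_mean(angles):
    """Mean direction in radians: atan2 of the averaged sines and cosines."""
    s = sum(math.sin(a) for a in angles) / len(angles)
    c = sum(math.cos(a) for a in angles) / len(angles)
    return math.atan2(s, c)

# Angles symmetric about 0.5 average to 0.5 ...
print(directional_mean([0.1, 0.5, 0.9]))
# ... and, unlike the arithmetic mean, wrap-around is handled correctly:
# 0.1 and 2π − 0.1 average to 0, not to π.
print(directional_mean([0.1, 2 * math.pi - 0.1]))
```

The second call illustrates why the circular mean, rather than the ordinary mean of the raw angles, is the natural location estimator here.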

2.
A probability property that connects the skew normal (SN) distribution with the normal distribution is used for proposing a goodness-of-fit test for the composite null hypothesis that a random sample follows an SN distribution with unknown parameters. The random sample is transformed to approximately normal random variables, and then the Shapiro–Wilk test is used for testing normality. The implementation of this test requires neither a parametric bootstrap nor the use of tables for different values of the slant parameter. An additional test for the same problem, based on a property that relates the gamma and SN distributions, is also introduced. The results of a power study conducted by Monte Carlo simulation show that the proposed tests have good properties in comparison to existing tests for the same problem.
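A well-known connection of the kind the abstract refers to is that if Z is standard skew normal with any slant λ, then Z² ~ χ²(1), exactly as for a standard normal; the slant drops out after squaring. A simulation sketch of this property (our illustration, not necessarily the specific property the authors exploit):

```python
import math, random

def rskewnorm(lam, rng):
    """One draw from SN(0, 1, lam) via the |U|-representation."""
    d = lam / math.sqrt(1.0 + lam * lam)
    u, v = abs(rng.gauss(0, 1)), rng.gauss(0, 1)
    return d * u + math.sqrt(1.0 - d * d) * v

rng = random.Random(1)
means = []
for lam in (0.0, 2.0, 10.0):
    m = sum(rskewnorm(lam, rng) ** 2 for _ in range(50000)) / 50000
    means.append(m)
    print(lam, round(m, 2))   # each close to 1, the mean of chi-square(1)
```

Whatever λ is, the squared draws average to about 1, which is what makes a slant-free transformation to (approximate) normality possible.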

3.
This paper investigates the roles of partial correlation and conditional correlation as measures of the conditional independence of two random variables. It first establishes a sufficient condition for the coincidence of the partial correlation with the conditional correlation. The condition is satisfied not only for multivariate normal but also for elliptical, multivariate hypergeometric, multivariate negative hypergeometric, multinomial and Dirichlet distributions. Such families of distributions are characterized by a semigroup property as a parametric family of distributions. A necessary and sufficient condition for the coincidence of the partial covariance with the conditional covariance is also derived; however, no known family of multivariate distributions other than the multivariate normal satisfies this condition. The paper also shows that conditional independence has no close ties with zero partial correlation except in the case of the multivariate normal distribution; it has rather close ties to zero conditional correlation. It shows that the equivalence between zero conditional covariance and conditional independence for normal variables is retained under any monotone transformation of each variable. The results suggest that care must be taken when using such correlations as measures of conditional independence unless the joint distribution is known to be normal; otherwise, zero conditional correlation or other statistics may need to serve in place of partial correlation as a criterion for conditional independence.

4.
The continuous extension of a discrete random variable is amongst the computational methods used for estimation of multivariate normal copula-based models with discrete margins. Its advantage is that the likelihood can be derived conveniently under the theory for copula models with continuous margins, but there has not been a clear analysis of the adequacy of this method. We investigate the asymptotic and small-sample efficiency of two variants of the method for estimating the multivariate normal copula with univariate binary, Poisson, and negative binomial regressions, and show that they lead to biased estimates for the latent correlations and for the univariate marginal parameters that are not regression coefficients. We implement a maximum simulated likelihood method, which is based on evaluating the multidimensional integrals of the likelihood with randomized quasi-Monte Carlo methods. Asymptotic and small-sample efficiency calculations show that our method is nearly as efficient as maximum likelihood for fully specified multivariate normal copula-based models. An illustrative example is given to show the use of our simulated likelihood method.

5.
The likelihood function from a large sample is commonly assumed to be approximately a normal density function. The literature supports, under mild conditions, an approximate normal shape about the maximum; but typically a stronger result is needed: that the normalized likelihood itself is approximately a normal density. In a transformation-parameter context, we consider the likelihood normalized relative to right-invariant measure, and in the location case under moderate conditions show that the standardized version converges almost surely to the standard normal. Also in a transformation-parameter context, we show that almost sure convergence of the normalized and standardized likelihood to a standard normal implies that the standardized distribution for conditional inference converges almost surely to a corresponding standard normal. This latter result is of immediate use for a range of estimating, testing, and confidence procedures on a conditional-inference basis.

6.
A stochastic volatility in mean model with correlated errors using the symmetrical class of scale mixtures of normal distributions is introduced in this article. The scale mixture of normal distributions is an attractive class of symmetric distributions that includes the normal, Student-t, slash and contaminated normal distributions as special cases, providing a robust alternative to estimation in stochastic volatility in mean models in the absence of normality. Using a Bayesian paradigm, an efficient method based on Markov chain Monte Carlo (MCMC) is developed for parameter estimation. The methods developed are applied to analyze daily stock return data from the São Paulo Stock, Mercantile & Futures Exchange index (IBOVESPA). The Bayesian predictive information criteria (BPIC) and the logarithm of the marginal likelihood are used as model selection criteria. The results reveal that the stochastic volatility in mean model with correlated errors and slash distribution provides a significant improvement in model fit for the IBOVESPA data over the usual normal model.
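The scale-mixture idea is easy to see for the Student-t member of the class: X = Z/√(W/ν) with Z ~ N(0, 1) and W ~ χ²(ν) has a t distribution with ν degrees of freedom. A hypothetical sampling sketch:

```python
import random

def rt_mixture(nu, rng):
    """One Student-t(nu) draw as a scale mixture of normals."""
    z = rng.gauss(0, 1)
    w = rng.gammavariate(nu / 2.0, 2.0)   # chi-square(nu) as Gamma(nu/2, scale 2)
    return z / (w / nu) ** 0.5

rng = random.Random(3)
nu = 6
xs = [rt_mixture(nu, rng) for _ in range(100000)]
var = sum(x * x for x in xs) / len(xs)
print(round(var, 2))   # close to nu/(nu - 2) = 1.5
```

In an MCMC scheme the mixing variable W is typically treated as latent and sampled alongside the other parameters, which is what makes estimation under the whole class tractable.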

7.
The problem of setting confidence bounds on a central multivariate normal quantile is considered. It is shown that for the setting of exact confidence bounds of specified closeness to the quantile, the required minimum size of a normal sample is large and rises rapidly with the number of variates considered.

8.
The problem of selecting good populations out of k normal populations is considered in a Bayesian framework under exchangeable normal priors and additive loss functions. Some basic approximations to the Bayes rules are discussed. These approximations suggest that some well-known classical rules are "approximate" Bayes rules. In particular, it is shown that Gupta-type rules are extended Bayes with respect to a family of the exchangeable normal priors for any bounded and additive loss function. Furthermore, for a simple loss function, the results of a Monte Carlo comparison of Gupta-type rules and Seal-type rules are presented. They indicate that, in general, Gupta-type rules perform better than Seal-type rules.

9.
The use of the "exact test" with the 2×2 table that records the observations obtained in a comparative trial has been widely considered to be the paradigm of statistical tests of significance. This is attributable to the fact that it is based on the theories of R.A. Fisher and as a result has acquired the sobriquet "exact". The Fisherian basis of the exact test, that the marginal totals are "ancillary statistics" and therefore provide no information respecting the configuration of the body of the table, is shown to be incorrect. The exact test for the one-sided case is compared with the normal test for the nominal significance levels P=0.05 and P=0.01. It is shown by direct computation that the effective level is closer to the nominal level with the normal test than with the exact test, and that the power of the normal test is considerably larger than the power of the exact test, the increased power exceeding the change of effective level. It is concluded that the exact test should not be used in preference to the normal test.
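The two competing tests can be computed directly from a 2×2 table. A stdlib sketch (the table and helper names are ours): the one-sided exact p-value is a hypergeometric tail with the margins held fixed, while the normal test uses the pooled-proportion z statistic.

```python
import math

def fisher_one_sided(a, b, n1, n2):
    """P(A >= a) under the hypergeometric with fixed margins."""
    s = a + b
    total = math.comb(n1 + n2, s)
    return sum(math.comb(n1, x) * math.comb(n2, s - x)
               for x in range(a, min(n1, s) + 1)) / total

def normal_one_sided(a, b, n1, n2):
    """One-sided z test for p1 > p2 using the pooled proportion."""
    p1, p2, p = a / n1, b / n2, (a + b) / (n1 + n2)
    z = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))   # upper-tail normal prob.

a, b, n1, n2 = 9, 3, 12, 12          # 9/12 vs 3/12 successes
p_exact = fisher_one_sided(a, b, n1, n2)
p_norm = normal_one_sided(a, b, n1, n2)
print(round(p_exact, 4), round(p_norm, 4))   # the exact p is the larger here
```

On this table the exact p-value is roughly 0.02 against roughly 0.007 for the normal test, illustrating the conservatism of the exact test that the paper quantifies.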

10.
We investigate empirical likelihood for the additive hazards model with current status data. An empirical log-likelihood ratio for a vector or subvector of regression parameters is defined and its limiting distribution is shown to be a standard chi-squared distribution. The proposed inference procedure enables us to make empirical likelihood-based inference for the regression parameters. Finite sample performance of the proposed method is assessed in simulation studies and compared with that of a normal approximation method; the results show that the empirical likelihood method provides more accurate inference than the normal approximation method. A real data example is used for illustration.

11.
Maclean et al. (1976) applied a specific Box-Cox transformation to test for mixtures of distributions against a single distribution. Their null hypothesis is that a sample of n observations is from a normal distribution with unknown mean and variance after a restricted Box-Cox transformation. The alternative is that the sample is from a mixture of two normal distributions, each with unknown mean and unknown, but equal, variance after another restricted Box-Cox transformation. We developed a computer program that calculated the maximum likelihood estimates (MLEs) and likelihood ratio test (LRT) statistic for this problem. Our algorithm for the calculation of the MLEs of the unknown parameters used multiple starting points to protect against convergence to a local rather than global maximum. We then simulated the distribution of the LRT for samples drawn from a normal distribution and five Box-Cox transformations of a normal distribution. The null distribution appeared to be the same for the Box-Cox transformations studied and appeared to be distributed as a chi-square random variable for samples of 25 or more. The degrees of freedom parameter appeared to be a monotonically decreasing function of the sample size. The null distribution of this LRT appeared to converge to a chi-square distribution with 2.5 degrees of freedom. We estimated the critical values for the 0.10, 0.05, and 0.01 levels of significance.

12.
It is generally assumed that, under some regularity conditions, the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution versus the alternative hypothesis that data arise from a heteroscedastic normal mixture distribution has an asymptotic χ² reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models. Simulations show that the χ² reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated, provided the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or a normal mixture with unequal variances.

13.
In this article we propose an improvement of the Kolmogorov–Smirnov test for normality. In the current implementation of the Kolmogorov–Smirnov test, the data are compared with a normal distribution that uses the sample mean and the sample variance. We propose instead to select the mean and variance of the normal distribution that provide the closest fit to the data. This amounts to shifting and stretching the reference normal distribution so that it fits the data in the best possible way. A study of the power of the proposed test indicates that it is able to discriminate between the normal distribution and distributions such as the uniform, bimodal, beta, exponential, and log-normal, which differ from it in shape, but has relatively lower power against the Student's t-distribution, which is similar in shape to the normal distribution. We also compare the performance (both in power and in sensitivity to outlying observations) of the proposed test with existing normality tests such as Anderson–Darling and Shapiro–Francia.
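The "closest fit" idea can be sketched with a crude grid search over (μ, σ) around the plug-in values (this is our illustration of the principle, not the authors' optimizer): because the plug-in pair is on the grid, the fitted Kolmogorov–Smirnov distance can only be smaller or equal.

```python
import math, random, statistics

def ks_dist(xs, mu, sigma):
    """Kolmogorov-Smirnov distance between data and Normal(mu, sigma)."""
    xs = sorted(xs)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

rng = random.Random(5)
xs = [rng.gauss(10, 2) for _ in range(200)]
m, s = statistics.fmean(xs), statistics.stdev(xs)
d_plugin = ks_dist(xs, m, s)                       # classical plug-in version
d_best = min(ks_dist(xs, m + dm, s * ds)           # shift and stretch the
             for dm in (-0.4, -0.2, 0.0, 0.2, 0.4) # reference normal
             for ds in (0.8, 0.9, 1.0, 1.1, 1.2))
print(d_best <= d_plugin)
```

A serious implementation would minimize over (μ, σ) with a proper optimizer, but the comparison above already shows the mechanism: the optimized reference distribution fits at least as closely as the moment-matched one.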

14.
In this article, the operational details of the R package PoisNor that is designed for simulating multivariate data with count and continuous variables with a prespecified correlation matrix are described, and examples of some important functions are given. The data-generation mechanism is a combination of the "NORmal To Anything" principle and a recently established connection between Poisson and normal correlations. The package provides a unique and useful tool that has been lacking for generating multivariate mixed data with Poisson and normal components.
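The "NORmal To Anything" step the abstract mentions works by drawing correlated standard normals and pushing each through Φ and the target inverse CDF. A stdlib sketch with one Poisson and one normal margin (an illustration of the principle, not the package's code; it also shows why the Poisson–normal correlation connection is needed, since the discrete step attenuates the latent correlation):

```python
import math, random

def poisson_quantile(u, lam):
    """Smallest k with P(X <= k) >= u for X ~ Poisson(lam)."""
    k, p, cdf = 0, math.exp(-lam), math.exp(-lam)
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

rng = random.Random(11)
r = 0.7                       # latent normal correlation
pairs = []
for _ in range(20000):
    z1 = rng.gauss(0, 1)
    z2 = r * z1 + math.sqrt(1 - r * r) * rng.gauss(0, 1)
    u = 0.5 * (1 + math.erf(z1 / math.sqrt(2)))          # u = Phi(z1)
    pairs.append((poisson_quantile(u, 3.0), z2))          # (Poisson(3), N(0,1))

xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                sum((y - my) ** 2 for y in ys))
c_hat = num / den
print(round(c_hat, 2))   # somewhat below 0.7: the discretization attenuates r
```

A package like PoisNor solves the inverse problem: given the target correlation for the mixed pair, it finds the latent normal correlation to feed into this mechanism.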

15.
A two-sample test of likelihood ratio type is proposed, assuming normal distribution theory, for testing the hypothesis that two samples come from identical normal populations versus the alternative that the populations are normal but differ in mean and variance, with one population having a smaller mean and smaller variance than the other. The small-sample and large-sample distributions of the proposed statistic are derived assuming normality. Some computations are presented which show the speed of convergence of small-sample critical values to their asymptotic counterparts. Comparisons of local power of the proposed test are made with several potential competing tests. Asymptotics for the test statistic are derived when the underlying distributions are not necessarily normal.

16.
Mixture models are used in a large number of applications, yet there remain difficulties with maximum likelihood estimation. For instance, the likelihood surface for finite normal mixtures often has a large number of local maximizers, some of which do not give a good representation of the underlying features of the data. In this paper we present diagnostics that can be used to check the quality of an estimated mixture distribution. Particular attention is given to normal mixture models since they frequently arise in practice. We use the diagnostic tools for finite normal mixture problems and in the nonparametric setting, where the difficult problem of determining a scale parameter for a normal mixture density estimate is considered. A large sample justification for the proposed methodology is provided and we illustrate its implementation through several examples.

17.
It is well known that there exist multiple roots of the likelihood equations for finite normal mixture models. Selecting a consistent root for finite normal mixture models has long been a challenging problem. Simply using the root with the largest likelihood will not work because of the spurious roots. In addition, the likelihood of normal mixture models with unequal variance is unbounded and thus its maximum likelihood estimate (MLE) is not well defined. In this paper, we propose a simple root selection method for univariate normal mixture models by incorporating the idea of a goodness of fit test. Our new method inherits both the consistency properties of distance estimators and the efficiency of the MLE. The new method is simple to use and its computation can be easily done using existing R packages for mixture models. In addition, the proposed root selection method is very general and can also be applied to other univariate mixture models. We demonstrate the effectiveness of the proposed method and compare it with some other existing methods through simulation studies and a real data application.

18.
This paper is concerned with obtaining an expression for the conditional variance-covariance matrix when the random vector is a gamma-scaled multivariate normal. We show that the conditional variance is not degenerate, as it is in the multivariate normal distribution, but depends upon a positive function for which various asymptotic properties are derived. A discussion section is included commenting on the usefulness of these results.

19.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

20.
The paper deals with the numerical solution of the likelihood equations for incomplete data from exponential families, that is, for data that are a function of exponential family data. Illustrative examples especially studied in this paper concern grouped and censored normal samples and normal mixtures. A simple iterative method of solution is proposed and studied. It is shown that the sequence of iterates converges to a relative maximum of the likelihood function, and that the convergence is geometric with a factor of convergence which for large samples equals the maximal relative loss of Fisher information due to the incompleteness of the data. This large-sample factor of convergence is illustrated diagrammatically for the examples mentioned above. Experiences of practical application are mentioned.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号