Similar Articles (20 retrieved)
1.
The inverse Gaussian (IG) distribution is widely used to model positively skewed data. An important issue is to develop a powerful goodness-of-fit test for the IG distribution. We propose and examine novel test statistics for testing IG goodness of fit based on the density-based empirical likelihood (EL) ratio concept. To construct the test statistics, we use a new approach that employs minimization of the discrimination information loss estimator to minimize Kullback–Leibler-type information. The proposed tests are shown to be consistent against wide classes of alternatives. We show that the density-based EL ratio tests are more powerful than the corresponding classical goodness-of-fit tests. The practical efficiency of the tests is illustrated using real data examples.
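The density-based EL ratio statistic of the abstract is not reproduced here. As a rough baseline for what an IG goodness-of-fit check involves, below is a minimal sketch of a generic parametric-bootstrap Kolmogorov–Smirnov test built on scipy's `invgauss`; the function name, the KS discrepancy and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats

def ig_gof_pvalue(x, n_boot=200, seed=0):
    """Parametric-bootstrap KS check of an inverse Gaussian fit.

    Not the density-based EL ratio test of the paper; a generic baseline:
    fit the IG model by ML, compute the KS distance, and calibrate it by
    resampling from the fitted model.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # scipy's invgauss is parameterised by (mu, loc, scale); fix loc = 0
    mu, loc, scale = stats.invgauss.fit(x, floc=0)
    d_obs = stats.kstest(x, stats.invgauss(mu, loc, scale).cdf).statistic
    d_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = stats.invgauss.rvs(mu, loc, scale, size=x.size, random_state=rng)
        mub, locb, scaleb = stats.invgauss.fit(xb, floc=0)
        d_boot[b] = stats.kstest(xb, stats.invgauss(mub, locb, scaleb).cdf).statistic
    return float(np.mean(d_boot >= d_obs))

# Example: data simulated from an IG model should give a large p-value.
x = stats.invgauss.rvs(0.5, loc=0, scale=2.0, size=200, random_state=1)
print(ig_gof_pvalue(x))
```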

2.
The inverse Gaussian (IG) distribution is commonly introduced to model and examine right-skewed data with positive support. When applying the IG model, it is critical to develop efficient goodness-of-fit tests. In this article, we propose a new test statistic for examining IG goodness of fit based on approximating parametric likelihood ratios. The parametric likelihood ratio methodology is well known to provide powerful likelihood ratio tests. In the nonparametric context, the classical empirical likelihood (EL) ratio method is often applied in order to efficiently approximate properties of parametric likelihoods, using an approach based on substituting empirical distribution functions for their population counterparts. The optimal parametric likelihood ratio approach is, however, based on density functions. We develop and analyze the EL ratio approach based on densities in order to test the IG model fit. We show that the proposed test is an improvement over the entropy-based goodness-of-fit test for the IG presented by Mudholkar and Tian (2002). Theoretical support is obtained by proving consistency of the new test and an asymptotic proposition regarding the null distribution of the proposed test statistic. Monte Carlo simulations confirm the powerful properties of the proposed method. Real data examples demonstrate the applicability of the density-based EL ratio goodness-of-fit test for an IG assumption in practice.

3.
For the first time, we propose a five-parameter lifetime model called the McDonald Weibull distribution to extend the Weibull, exponentiated Weibull, beta Weibull and Kumaraswamy Weibull distributions, among several other models. We obtain explicit expressions for the ordinary moments, quantile and generating functions, mean deviations and moments of the order statistics. We use the method of maximum likelihood to fit the new distribution and determine the observed information matrix. We define the log-McDonald Weibull regression model for censored data. The potential of the new model is illustrated by means of two real data sets.

4.
We establish asymptotic theory for both the maximum likelihood and the maximum modified likelihood estimators in mixture regression models. Moreover, under specific and reasonable conditions, we show that the optimal convergence rate of n^{-1/4} for estimating the mixing distribution is achievable for both the maximum likelihood and the maximum modified likelihood estimators. We also derive the asymptotic distributions of two log-likelihood ratio test statistics for testing homogeneity and we propose a resampling procedure for approximating the p-value. Simulation studies are conducted to investigate the empirical performance of the two test statistics. Finally, two real data sets are analysed to illustrate the application of our theoretical results.

5.
For the first time, we propose a new distribution, the so-called beta generalized Rayleigh distribution, which contains some well-known distributions as special sub-models. Expansions for the cumulative distribution and density functions are derived. We obtain explicit expressions for the moments, moment generating function, mean deviations, Bonferroni and Lorenz curves, and the densities of the order statistics and their moments. We estimate the parameters by maximum likelihood and provide the observed information matrix. The usefulness of the new distribution is illustrated through two real data sets, which show that it is more flexible for analyzing positive data than the generalized Rayleigh and Rayleigh distributions.

6.
The lognormal distribution is one of the popular distributions used for modelling positively skewed data, especially data encountered in economics and finance. In this paper, we propose an efficient method for the estimation of the parameters and quantiles of the three-parameter lognormal distribution, which avoids the problem of unbounded likelihood by using statistics that are invariant to the unknown location. Through a Monte Carlo simulation study, we then show that the proposed method performs well compared to other prominent methods in terms of both bias and mean-squared error. Finally, we present two illustrative examples.
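The location-invariant estimators of the abstract are not reproduced here. For orientation, below is a minimal sketch of the generic three-parameter lognormal maximum likelihood fit available in scipy, whose unrestricted likelihood can be unbounded in the threshold, which is precisely the problem the proposed method avoids; all numerical values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate three-parameter lognormal data: threshold (loc) 5, log-scale
# sigma 0.6, scale exp(mu) = 10.  These values are illustrative only.
x = stats.lognorm.rvs(s=0.6, loc=5.0, scale=10.0, size=300, random_state=rng)

# Generic maximum likelihood fit; scipy's `loc` is the threshold parameter.
# Note that the ordinary likelihood is unbounded as loc -> min(x), which is
# the issue the location-invariant approach in the paper is designed to avoid.
s_hat, loc_hat, scale_hat = stats.lognorm.fit(x)
print("sigma =", s_hat, "threshold =", loc_hat, "exp(mu) =", scale_hat)

# A quantile estimate (e.g. the 95th percentile) from the fitted model.
print("q95 =", stats.lognorm.ppf(0.95, s_hat, loc_hat, scale_hat))
```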

7.
The inverse Gaussian distribution has been used widely as a model for analysing lifetime data. In this regard, estimation of the parameters of the two-parameter (IG2) and three-parameter inverse Gaussian (IG3) distributions based on complete and censored samples has been discussed in the literature. In this paper, we develop estimation methods based on progressively Type-II censored samples from the IG3 distribution. In particular, we use the EM algorithm, as well as some other numerical methods, for determining the maximum likelihood estimates (MLEs) of the parameters. The asymptotic variances and covariances of the MLEs from the EM algorithm are derived by using the missing information principle. We also consider some simplified alternative estimators. The inferential methods developed are then illustrated with some numerical examples. We also discuss the interval estimation of the parameters based on large-sample theory and examine the true coverage probabilities of these confidence intervals in the case of small samples by means of Monte Carlo simulations.
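The EM steps for the IG3 parameters are not reproduced here. As a building block for this kind of simulation study, below is a minimal sketch of the standard uniform-transformation algorithm for generating a progressively Type-II censored sample from an arbitrary continuous distribution via its quantile function; the removal scheme and the IG parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def progressive_type2_sample(ppf, scheme, rng):
    """Generate a progressively Type-II censored sample.

    `scheme` is the removal vector (R_1, ..., R_m); the total sample size
    is n = m + sum(scheme).  Uses the uniform-transformation algorithm of
    Balakrishnan and Sandhu (1995) and maps the censored uniform order
    statistics through the quantile function `ppf`.
    """
    scheme = np.asarray(scheme, dtype=int)
    m = scheme.size
    w = rng.uniform(size=m)
    # V_i = W_i ** (1 / (i + R_m + R_{m-1} + ... + R_{m-i+1}))
    tail_sums = np.cumsum(scheme[::-1])            # R_m, R_m + R_{m-1}, ...
    v = w ** (1.0 / (np.arange(1, m + 1) + tail_sums))
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    u = 1.0 - np.cumprod(v[::-1])
    return ppf(u)

rng = np.random.default_rng(0)
# Illustrative: m = 10 observed failures out of n = 20, IG(mu = 2) lifetimes.
scheme = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
x = progressive_type2_sample(stats.invgauss(2.0).ppf, scheme, rng)
print(np.round(x, 3))
```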

8.
Insurance and economic data are frequently characterized by positivity, skewness, leptokurtosis, and multi-modality; although many parametric models have been used in the literature, these peculiarities often call for more flexible approaches. Here, we propose a finite mixture of contaminated gamma distributions that provides a better characterization of the data. It sits between parametric and non-parametric density estimation and strikes a balance between these alternatives, as a large class of densities can be implemented. We adopt a maximum likelihood approach to estimate the model parameters, providing the likelihood and the expectation-maximization algorithm implemented to estimate all unknown parameters. We apply our approach to an artificial dataset and to two well-known datasets, the workers' compensation data and the healthcare expenditure data taken from the Medical Expenditure Panel Survey. The Value-at-Risk is evaluated and comparisons with other benchmark models are provided.
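The contaminated gamma mixture and its EM fit are not reproduced here. Under an assumed two-component parameterization (a main gamma component plus a scale-inflated contamination component, which is only one way such a contaminated mixture can be written), below is a minimal sketch of evaluating the mixture CDF and reading off a Value-at-Risk level by root finding; all weights and parameters are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def mixture_cdf(x, weights, shapes, scales):
    """CDF of a finite gamma mixture (assumed parameterization)."""
    comps = [w * stats.gamma.cdf(x, a, scale=s)
             for w, a, s in zip(weights, shapes, scales)]
    return np.sum(comps, axis=0)

def value_at_risk(level, weights, shapes, scales, upper=1e6):
    """VaR at `level` = the quantile of the mixture, found by root finding."""
    return optimize.brentq(
        lambda x: mixture_cdf(x, weights, shapes, scales) - level, 1e-12, upper)

# Illustrative contaminated-gamma component: a 'good' gamma plus a small
# proportion of a scale-inflated gamma capturing the heavy right tail.
weights = [0.9, 0.1]
shapes = [2.0, 2.0]
scales = [1.0, 8.0]     # contamination inflates the scale
print("VaR 99% =", value_at_risk(0.99, weights, shapes, scales))
```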

9.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
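The composite-score summary statistic itself is not reproduced here. To show where such a summary plugs in, below is a minimal rejection-ABC sketch with a user-supplied summary function; the toy normal-mean example and the tolerance value are illustrative, and the paper's standardized composite score would replace the simple sample-mean summary.

```python
import numpy as np

def abc_rejection(y_obs, prior_sampler, simulator, summary, n_draws, eps, rng):
    """Basic rejection ABC: keep parameter draws whose simulated summary
    falls within `eps` of the observed summary (Euclidean distance).
    Any callable works as `summary`; the paper proposes a standardized
    composite likelihood score.
    """
    s_obs = np.atleast_1d(summary(y_obs))
    kept = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        y_sim = simulator(theta, rng)
        s_sim = np.atleast_1d(summary(y_sim))
        if np.linalg.norm(s_sim - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

# Toy example: infer the mean of N(theta, 1) data with a N(0, 5^2) prior.
rng = np.random.default_rng(0)
y_obs = rng.normal(1.5, 1.0, size=100)
post = abc_rejection(
    y_obs,
    prior_sampler=lambda r: r.normal(0.0, 5.0),
    simulator=lambda th, r: r.normal(th, 1.0, size=100),
    summary=lambda y: np.mean(y),
    n_draws=20000, eps=0.05, rng=rng)
print("accepted:", post.size, "posterior mean ~", post.mean())
```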

10.
Nonlinear mixed-effects models are very useful for analyzing repeated measures data and are used in a variety of applications. Normal distributions for the random effects and residual errors are usually assumed, but such assumptions make inferences vulnerable to the presence of outliers. In this work, we introduce an extension of the normal nonlinear mixed-effects model considering a subclass of elliptically contoured distributions for both the random effects and residual errors. This elliptical subclass, the scale mixtures of normal (SMN) distributions, includes heavy-tailed multivariate distributions such as the Student-t, contaminated normal and slash distributions, among others, and represents an interesting alternative for accommodating outliers while maintaining the elegance and simplicity of maximum likelihood theory. We propose an exact estimation procedure to obtain the maximum likelihood estimates of the fixed effects and variance components, using a stochastic approximation of the EM algorithm. We compare the performance of the normal and SMN models using two real data sets.

11.
In this paper, we develop modified versions of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class, which has the normal distribution as a special case. We derive the Skovgaard-adjusted likelihood ratio statistics, which follow a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed tests display superior finite-sample behaviour compared with the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

12.
The two-parameter inverse Gaussian (IG) distribution is often more appropriate and convenient for modelling and analysing nonnegative right-skewed data than the better known and now ubiquitous Gaussian distribution. Its convenience stems from its analytic simplicity and the striking similarities of its methodologies with those employed with normal theory models. These, known as the G–IG analogies, include the concepts and measures of IG-symmetry, IG-skewness and IG-kurtosis, the IG-analogues of the corresponding classical notions and measures. The new IG-associated entities, although well defined and mathematically transparent, are intuitively and conceptually opaque. In this paper, we first elaborate the importance of the IG distribution and of the G–IG analogies. We then consider the IG-related root-reciprocal IG (RRIG) distribution and introduce a physically transparent, conceptually clear notion of reciprocal symmetry (R-symmetry), which we use to explain IG-symmetry. We study the moments and mixture properties of R-symmetric distributions and the relationship of R-symmetry with IG-symmetry, and note that the RRIG distribution provides a link, in addition to Tweedie's Laplace transform link, between the Gaussian and inverse Gaussian distributions. We also give a structural characterization of the unimodal R-symmetric distributions. This work further expands the long list of G–IG analogies. Several applications of R-symmetry, including product convolution, monotonicity of power functions, peakedness and monotone limit theorems, are outlined.
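To make the RRIG link concrete, below is a minimal sketch under the usual convention that the root-reciprocal IG variable is Y = X^{-1/2} for X inverse Gaussian; the IG mean and scale values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from an inverse Gaussian (Wald) distribution, then transform to the
# root-reciprocal IG (RRIG) scale Y = 1 / sqrt(X).  Mean and scale values
# here are illustrative only.
x = rng.wald(mean=1.0, scale=2.0, size=100_000)
y = 1.0 / np.sqrt(x)

# The RRIG sample lives on (0, inf); a quick numerical look at its moments.
print("E[Y] ~", y.mean(), "  Var[Y] ~", y.var())
```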

13.
Statistical distributions are very useful in describing and predicting real-world phenomena. In many applied areas there is a clear need for extended forms of the well-known distributions. Generally, the new distributions are more flexible in modeling real data that exhibit a high degree of skewness and kurtosis. The choice of the best-suited statistical distribution for modeling data is therefore very important.

In this article, we propose an extended generalized Gompertz (EGGo) family of distributions. Certain statistical properties of the EGGo family, including distribution shapes, the hazard function, skewness, limiting behavior, moments and order statistics, are discussed. The flexibility of this family is assessed by its application to real data sets and comparison with other competing distributions. The maximum likelihood equations for estimating the parameters based on real data are given. The performance of several estimators, namely the maximum likelihood, least squares, weighted least squares, Cramér–von Mises, Anderson–Darling and right-tailed Anderson–Darling estimators, is discussed. The likelihood ratio test is derived to assess whether the EGGo distribution fits a data set better than its nested sub-models. We use R software for simulation in order to perform applications and to check the validity of this model.
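The EGGo density itself is not reproduced here. As a generic illustration of the nested-model comparison described in the abstract, below is a minimal likelihood ratio test sketch that takes the two maximized log-likelihoods as inputs; the log-likelihood values and the single restricted parameter are placeholders.

```python
from scipy import stats

def likelihood_ratio_test(loglik_full, loglik_nested, df):
    """Standard LR test of a nested sub-model against the full model.

    `df` is the number of restricted parameters; under the null the
    statistic is asymptotically chi-squared with `df` degrees of freedom.
    """
    lr = 2.0 * (loglik_full - loglik_nested)
    p_value = stats.chi2.sf(lr, df)
    return lr, p_value

# Placeholder maximized log-likelihoods for the full model (e.g. EGGo) and a
# nested sub-model (e.g. generalized Gompertz); the numbers are illustrative.
lr, p = likelihood_ratio_test(loglik_full=-412.3, loglik_nested=-418.9, df=1)
print(f"LR = {lr:.2f}, p-value = {p:.4f}")
```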

14.
In this paper, we investigate the construction of compromise estimators of location and scale, obtained by averaging over several models selected from a specified large set of possible models. The weight given to each distribution is based on the profile likelihood, which leads to a notion of distance between distributions when we study the asymptotic behaviour of such estimators. The selection of the models is made in a minimax way, in order to choose distributions that are close to any possible distribution. We also present simulation results for such compromise estimators based on contaminated Gaussian and Student's t distributions.

15.
This article investigates the Farlie–Gumbel–Morgenstern (FGM) class of models for exchangeable continuous data. We show how the model specification can account for both individual- and cluster-level covariates, we derive insights from comparisons with the multivariate normal distribution, and we discuss maximum likelihood inference when a sample of independent clusters of varying sizes is available. We propose a method for maximum likelihood estimation that is an alternative to direct numerical maximization of the likelihood, which sometimes exhibits non-convergence problems. We describe an algorithm for generating samples from the exchangeable multivariate Farlie–Gumbel–Morgenstern distribution with any marginals, using the structural properties of the distribution. Finally, we present the results of a simulation study designed to assess the properties of the maximum likelihood estimators, and we illustrate the use of FGM distributions with the analysis of a small data set from a developmental toxicity study.
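The exchangeable multivariate algorithm of the article is not reproduced here. For the bivariate case, below is a minimal sketch of the standard conditional-inversion sampler for the FGM copula with arbitrary marginals; the dependence parameter and the Gaussian and exponential marginals are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def fgm_bivariate_sample(theta, ppf1, ppf2, n, rng):
    """Sample n pairs from a bivariate FGM copula with dependence parameter
    theta in [-1, 1], mapped through arbitrary marginal quantile functions
    ppf1 and ppf2 (conditional inversion).
    """
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = 1.0 + theta * (1.0 - 2.0 * u)
    b = np.sqrt(a * a - 4.0 * (a - 1.0) * w)
    v = 2.0 * w / (a + b)          # inverse of the conditional copula CDF
    return ppf1(u), ppf2(v)

rng = np.random.default_rng(0)
x, y = fgm_bivariate_sample(
    theta=0.7,
    ppf1=stats.norm(0, 1).ppf,          # illustrative marginals
    ppf2=stats.expon(scale=2.0).ppf,
    n=50_000, rng=rng)
print("sample correlation:", np.corrcoef(x, y)[0, 1])
```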

16.
In this article, we shall attempt to introduce a new class of lifetime distributions, which enfolds several known distributions such as the generalized linear failure rate distribution and covers both positive as well as negative skewed data. This new four-parameter distribution allows for flexible hazard rate behavior. Indeed, the hazard rate function here can be increasing, decreasing, bathtub-shaped, or upside-down bathtub-shaped. We shall first study some basic distributional properties of the new model such as the cumulative distribution function, the density of the order statistics, their moments, and Rényi entropy. Estimation of the stress-strength parameter as an important reliability property is also studied. The maximum likelihood estimation procedure for complete and censored data and Bayesian method are used for estimating the parameters involved. Finally, application of the new model to three real datasets is illustrated to show the flexibility and potential of the new model compared to rival models.  相似文献   

17.
Zhouping Li and Yang Wei, Statistics, 2018, 52(5): 1128–1155.
Testing Lorenz dominance is important in the economic and social sciences. In this article, we propose new tools for inference on the difference between two Lorenz curves. The asymptotic normality of the proposed smoothed nonparametric estimator is proved. We also propose a smoothed jackknife empirical likelihood (JEL) method, which avoids estimating the complicated asymptotic variance. It is proved that the proposed JEL ratio statistics converge to the standard chi-square distribution. Simulation studies and real data analysis are also conducted, and show encouraging finite-sample performance.
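The smoothed JEL machinery is not reproduced here. As a starting point for the quantity being tested, below is a minimal sketch of the plain empirical Lorenz curve and the pointwise difference between two such curves on a grid; the simulated lognormal incomes are illustrative.

```python
import numpy as np

def empirical_lorenz(x, p):
    """Empirical Lorenz curve L(p): share of the total held by the poorest
    fraction p, evaluated on a grid p in [0, 1].
    """
    x = np.sort(np.asarray(x, dtype=float))
    cum = np.concatenate(([0.0], np.cumsum(x))) / x.sum()
    # index of the poorest floor(p * n) observations
    idx = np.floor(p * x.size).astype(int)
    return cum[idx]

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 11)
incomes_a = rng.lognormal(mean=0.0, sigma=0.8, size=2000)   # illustrative
incomes_b = rng.lognormal(mean=0.0, sigma=0.5, size=2000)

diff = empirical_lorenz(incomes_a, grid) - empirical_lorenz(incomes_b, grid)
print(np.round(diff, 3))   # negative values: sample A is more unequal
```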

18.
The class of transmuted distributions has received a lot of attention in the recent statistical literature. In this paper, we propose a rich family of bivariate distributions whose conditionals are transmuted distributions. The new family depends on the two baseline distributions and three dependence parameters. Apart from the general properties, we also study the distribution of concomitants of order statistics. We study specific bivariate models. Estimation methodologies are proposed. A simulation study is conducted. The usefulness of this family is established by fitting it to well-analyzed real lifetime data.

19.
In this paper, we propose a new three-parameter model called the exponential–Weibull distribution, which includes some widely known lifetime distributions as special models. Some mathematical properties of the proposed distribution are investigated. We derive four explicit expressions for the generalized ordinary moments and a general formula for the incomplete moments based on infinite sums of Meijer G-functions. We also obtain explicit expressions for the generating function and mean deviations. We estimate the model parameters by maximum likelihood and determine the observed information matrix. Some simulations are run to assess the performance of the maximum likelihood estimators. The flexibility of the new distribution is illustrated by means of an application to real data.

20.
In this article, we develop new bootstrap-based inference for noncausal autoregressions with heavy-tailed innovations. This class of models is widely used for modeling bubbles and explosive dynamics in economic and financial time series. In the noncausal, heavy-tailed framework, a major drawback of asymptotic inference is that it is not feasible in practice, as the relevant limiting distributions depend crucially on the (unknown) decay rate of the tails of the innovation distribution. In addition, even in the unrealistic case where the tail behavior is known, asymptotic inference may suffer from small-sample issues. To overcome these difficulties, we propose bootstrap inference procedures using parameter estimates obtained with the null hypothesis imposed (the so-called restricted bootstrap). We discuss three different choices of bootstrap innovations: the wild bootstrap, based on Rademacher errors; the permutation bootstrap; and a combination of the two (the "permutation wild bootstrap"). Crucially, implementation of these bootstraps does not require any a priori knowledge about the distribution of the innovations, such as the tail index or the convergence rates of the estimators. We establish sufficient conditions ensuring that, under the null hypothesis, the bootstrap statistics consistently estimate particular conditional distributions of the original statistics. In particular, we show that validity of the permutation bootstrap holds without any restrictions on the distribution of the innovations, while the permutation wild and the standard wild bootstraps require further assumptions, such as symmetry of the innovation distribution. Extensive Monte Carlo simulations show that the finite-sample performance of the proposed bootstrap tests is exceptionally good, both in terms of size and of empirical rejection probabilities under the alternative hypothesis. We conclude by applying the proposed bootstrap inference to Bitcoin/USD exchange rates and to crude oil price data. We find that noncausal models with heavy-tailed innovations are indeed able to fit the data, including in periods of bubble dynamics. Supplementary materials for this article are available online.
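The full noncausal-autoregression test is not reproduced here. To make the three resampling schemes concrete, below is a minimal sketch of how one bootstrap innovation vector is drawn from a given residual vector under each scheme; the residuals are simulated placeholders, and the restricted (null-imposed) estimation step of the article is omitted.

```python
import numpy as np

def bootstrap_innovations(residuals, scheme, rng):
    """Draw one bootstrap innovation vector from estimated residuals.

    scheme = 'wild'        : Rademacher signs times the residuals
             'permutation' : a random permutation of the residuals
             'perm_wild'   : permutation combined with Rademacher signs
    The restricted (null-imposed) estimation of the article is not shown.
    """
    e = np.asarray(residuals, dtype=float)
    signs = rng.choice([-1.0, 1.0], size=e.size)
    if scheme == "wild":
        return signs * e
    if scheme == "permutation":
        return rng.permutation(e)
    if scheme == "perm_wild":
        return signs * rng.permutation(e)
    raise ValueError("unknown scheme")

rng = np.random.default_rng(0)
resid = rng.standard_t(df=3, size=10)     # heavy-tailed placeholder residuals
for s in ("wild", "permutation", "perm_wild"):
    print(s, np.round(bootstrap_innovations(resid, s, rng), 2))
```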
