Similar Literature
20 similar records retrieved.
1.
We consider the problem of making inferences about extreme values from a sample. The underlying model distribution is the generalized extreme-value (GEV) distribution, and our interest is in estimating the parameters and quantiles of the distribution robustly. In doing this we find estimates for the GEV parameters based on that part of the data which is well fitted by a GEV distribution. The robust procedure assigns weights between 0 and 1 to each data point. A weight near 0 indicates that the data point is not well modelled by the GEV distribution which fits the points with weights at or near 1. On the basis of these weights we are able to assess the validity of a GEV model for our data. It is important that the observations with low weights be carefully assessed to determine whether they are valid observations or not. If they are, we must examine whether our data could be generated by a mixture of GEV distributions or whether some other process is involved in generating the data. This will require careful consideration of the subject matter area which led to the data. The robust estimation techniques are based on optimal B-robust estimates. Their performance is compared with that of the probability-weighted moment estimates of Hosking et al. (1985) on both simulated and real data.
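The paper's optimal B-robust estimator is not reproduced here, but a crude diagnostic in the same spirit can be sketched: fit the GEV by maximum likelihood and flag observations with unusually low fitted log-density as candidates for down-weighting. The sketch below uses scipy.stats.genextreme (whose shape parameter c equals minus the usual GEV shape ξ); the cut-off is an arbitrary choice for illustration, not the paper's weighting rule.

```python
# A rough diagnostic sketch, NOT the optimal B-robust procedure of the paper:
# fit a GEV by maximum likelihood and flag points the fit explains poorly.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
x = genextreme.rvs(c=-0.2, loc=10.0, scale=2.0, size=100, random_state=rng)
x = np.append(x, [40.0, 55.0])                     # two suspicious extremes

c_hat, loc_hat, scale_hat = genextreme.fit(x)      # ML fit (c = -xi)
logdens = genextreme.logpdf(x, c_hat, loc=loc_hat, scale=scale_hat)

# Flag observations whose log-density falls far below the bulk
# (cut-off chosen ad hoc here; the paper derives weights from the robust fit).
cutoff = np.quantile(logdens, 0.05)
suspect = np.where(logdens < cutoff)[0]
print("ML estimates (c, loc, scale):", c_hat, loc_hat, scale_hat)
print("low-density observations:", suspect, x[suspect])
```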

2.
Extreme value theory models have found applications in myriad fields. Maximum likelihood (ML) is attractive for fitting the models because it is statistically efficient and flexible. However, in small samples, ML is biased to O(N⁻¹) and some classical hypothesis tests suffer from size distortions. This paper derives the analytical Cox–Snell bias correction for the generalized extreme value (GEV) model, and for the model's extension to multiple order statistics (GEVr). Using simulations, the paper compares this correction to bootstrap-based bias corrections, for the generalized Pareto, GEV, and GEVr. It then compares eight approaches to inference with respect to primary parameters and extreme quantiles, some including corrections. The Cox–Snell correction is not markedly superior to bootstrap-based correction. The likelihood ratio test appears most accurately sized. The methods are applied to the distribution of geomagnetic storms.
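As a point of reference for the bootstrap-based corrections mentioned above, a minimal parametric-bootstrap bias correction for the GEV maximum likelihood estimates can be sketched with scipy.stats.genextreme; the sample size and number of replicates are arbitrary, and this is a generic sketch rather than the paper's exact procedure.

```python
# Minimal parametric bootstrap bias correction for GEV ML estimates
# (a generic sketch, not the paper's exact setup).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
x = genextreme.rvs(c=-0.1, loc=0.0, scale=1.0, size=30, random_state=rng)

theta_hat = np.array(genextreme.fit(x))            # (c, loc, scale) by ML

B = 500
boot = np.empty((B, 3))
for b in range(B):
    xb = genextreme.rvs(theta_hat[0], loc=theta_hat[1], scale=theta_hat[2],
                        size=len(x), random_state=rng)
    boot[b] = genextreme.fit(xb)

bias_hat = boot.mean(axis=0) - theta_hat           # estimated small-sample bias
theta_bc = theta_hat - bias_hat                    # = 2*theta_hat - mean(boot)
print("ML estimate:   ", theta_hat)
print("bias-corrected:", theta_bc)
```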

3.
Aiming to avoid the sensitivity of the parameter estimates to atypical observations or skewness, we develop asymmetric nonlinear regression models with mixed effects, which provide alternatives to the use of the normal distribution and other symmetric distributions. Nonlinear models with mixed effects are explored in several areas of knowledge, especially when data are correlated, such as longitudinal data, repeated measures and multilevel data, in particular for their flexibility in dealing with measures from areas such as economics and pharmacokinetics. The random components of the present model are assumed to follow distributions that belong to the scale mixtures of skew-normal (SMSN) distribution family, which encompasses distributions with light and heavy tails, such as the skew-normal, skew-Student-t, skew-contaminated normal and skew-slash, as well as the symmetrical versions of these distributions. For parameter estimation we obtain a numerical solution via the EM algorithm and its extensions, and the Newton-Raphson algorithm. An application to pharmacokinetic data shows the superiority of the proposed models, for which the skew-contaminated normal distribution proved to be the most adequate. A brief simulation study points to good properties of the maximum likelihood estimators of the parameter vector.
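For orientation, a common parameterization of the skew-normal density and of the scale-mixture construction behind the SMSN family is sketched below; the notation (κ, U, H) is introduced here for illustration and need not match the paper's.

```latex
% Skew-normal density and a scale-mixture-of-skew-normal (SMSN) representation
% (one common parameterization; notation introduced here for illustration).
\[
f_{\mathrm{SN}}(y;\mu,\sigma^2,\lambda)
  = \frac{2}{\sigma}\,
    \phi\!\left(\frac{y-\mu}{\sigma}\right)
    \Phi\!\left(\lambda\,\frac{y-\mu}{\sigma}\right),
\]
\[
Y = \mu + \kappa(U)^{1/2} Z, \qquad
Z \sim \mathrm{SN}(0,\sigma^2,\lambda), \qquad
U \sim H(\cdot;\nu) \ \text{independent of } Z .
\]
% Different choices of the mixing distribution H and weight function kappa
% recover the skew-t, skew-slash and skew-contaminated normal members.
```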

4.
The popular generalized extreme value (GEV) distribution has proved insufficiently flexible as a model for extreme values in many areas. We propose a generalization, referred to as the Kumaraswamy GEV distribution, and provide a comprehensive treatment of its mathematical properties. We estimate its parameters by the method of maximum likelihood and provide the observed information matrix. An application to real data illustrates the flexibility of the new model. Finally, some bivariate generalizations of the model are proposed.
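The construction presumably follows the Kumaraswamy-G family, in which two extra shape parameters a, b > 0 are grafted onto a baseline CDF; taking the baseline to be the GEV gives the following sketch, under that assumption.

```latex
% Kumaraswamy-G construction with a GEV baseline (a, b > 0 extra shape parameters);
% G and g denote the GEV cdf and pdf.
\[
F(x) = 1 - \bigl\{1 - G(x)^{a}\bigr\}^{b},
\qquad
f(x) = a\,b\,g(x)\,G(x)^{a-1}\bigl\{1 - G(x)^{a}\bigr\}^{b-1},
\]
\[
G(x) = \exp\!\left[-\left\{1 + \xi\,\frac{x-\mu}{\sigma}\right\}^{-1/\xi}\right],
\qquad 1 + \xi\,\frac{x-\mu}{\sigma} > 0 .
\]
```

Setting a = b = 1 recovers the ordinary GEV distribution.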

5.
A pivotal characteristic of credit defaults that is ignored by most credit scoring models is the rarity of the event. The most widely used model to estimate the probability of default is the logistic regression model. Since the dependent variable represents a rare event, the logistic regression model has notable drawbacks, for example underestimation of the default probability, which could be very risky for banks. In order to overcome these drawbacks, we propose the generalized extreme value regression model. In particular, in a generalized linear model (GLM) with a binary dependent variable we suggest the quantile function of the GEV distribution as the link function, so that our attention is focused on the tail of the response curve for values close to one. The estimation procedure used is the maximum likelihood method. This model accommodates skewness and generalises the GLM with complementary log–log link function. We analyse its performance in simulation studies. Finally, we apply the proposed model to empirical data on Italian small and medium enterprises.
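A minimal sketch of such a GEV-link binary regression is given below, fitting the regression coefficients and the GEV shape jointly by maximum likelihood; since the link is the GEV quantile function, the inverse link (response curve) is the GEV CDF, taken here from scipy's genextreme (shape c = −ξ). This is an illustrative implementation with arbitrary simulated data, not the authors' code.

```python
# Sketch of a binary GLM whose response curve is the GEV cdf, fitted by
# maximum likelihood; illustrative only, not the authors' implementation.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate

def neg_loglik(params, X, y):
    beta, c = params[:-1], params[-1]
    p = genextreme.cdf(X @ beta, c)            # inverse link: GEV cdf
    p = np.clip(p, 1e-10, 1 - 1e-10)           # guard the log terms
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulate rare "defaults" from the assumed model (true values are arbitrary)
true_beta, true_c = np.array([-1.0, 0.5]), -0.3
p_true = np.clip(genextreme.cdf(X @ true_beta, true_c), 1e-10, 1 - 1e-10)
y = rng.binomial(1, p_true)

start = np.array([-1.0, 0.0, -0.1])            # (beta0, beta1, c)
fit = minimize(neg_loglik, start, args=(X, y), method="Nelder-Mead")
print("estimated (beta0, beta1, c):", fit.x, " default rate:", y.mean())
```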

6.
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter will be asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter will be asymptotically independent of the maximum likelihood estimator of the error distribution. This generalizes some well-known results for the special cases of normal, gamma, and multinomial regression models, and, perhaps more interestingly, suggests that asymptotically efficient estimation and inference can always be obtained if the error distribution is nonparametrically estimated along with the mean. In contrast, estimation and inference using misspecified error distributions or variance functions are generally not efficient.
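In the usual Cox-Reid sense, orthogonality of the mean parameter β to the error-distribution parameter θ means the cross block of the expected information vanishes, which is what delivers the block-diagonal asymptotic covariance and the asymptotic independence claimed above; a sketch in standard notation follows.

```latex
% Parameter orthogonality (Cox-Reid sense): zero cross-information between the
% mean-model parameter beta and the error-distribution parameter theta.
\[
I_{\beta\theta}
  = -\,\mathbb{E}\!\left[\frac{\partial^{2}\ell(\beta,\theta)}
                              {\partial\beta\,\partial\theta^{\mathsf T}}\right]
  = 0
\quad\Longrightarrow\quad
I(\beta,\theta) =
\begin{pmatrix} I_{\beta\beta} & 0 \\ 0 & I_{\theta\theta} \end{pmatrix},
\]
% so the asymptotic variance of the MLE of beta is the same whether theta is
% known, estimated parametrically, or estimated nonparametrically.
```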

7.
The four-parameter kappa distribution (K4D) is a generalized form of some commonly used distributions such as the generalized logistic, generalized Pareto, generalized Gumbel, and generalized extreme value (GEV) distributions. Owing to its flexibility, the K4D is widely applied to modeling in several fields such as hydrology and climate change. For the estimation of the four parameters, the maximum likelihood approach and the method of L-moments are usually employed. The L-moment estimator (LME) works well for some parameter spaces, up to a moderate sample size, but it sometimes fails to yield feasible estimates. Meanwhile, the maximum likelihood estimator (MLE) performs substantially worse with small sample sizes, in terms of a large variance of the estimator. We therefore propose maximum penalized likelihood estimation (MPLE) of the K4D by adjusting existing penalty functions that restrict the parameter space. Eighteen combinations of penalties for the two shape parameters are considered and compared. The MPLE retains modeling flexibility and large-sample optimality while also improving small-sample properties. The properties of the proposed estimator are verified through a Monte Carlo simulation, and an application is demonstrated using Thailand's annual maximum temperature data.
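A minimal sketch of penalized maximum likelihood for the K4D is shown below using scipy.stats.kappa4 (shape parameters h and k); the quadratic penalty on the shape parameters is purely illustrative and is not one of the eighteen penalties studied in the paper.

```python
# Sketch of maximum penalized likelihood estimation for the four-parameter
# kappa distribution (K4D); the penalty below is illustrative, not the paper's.
import numpy as np
from scipy.stats import kappa4
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = kappa4.rvs(h=0.2, k=0.1, loc=30.0, scale=2.0, size=25, random_state=rng)

def penalized_negloglik(params, x, lam=1.0):
    h, k, loc, scale = params
    if scale <= 0:
        return np.inf
    ll = np.sum(kappa4.logpdf(x, h, k, loc=loc, scale=scale))
    if not np.isfinite(ll):
        return np.inf                       # parameter values outside the support
    penalty = lam * (h**2 + k**2)           # illustrative shrinkage of the shapes
    return -ll + penalty

start = np.array([0.0, 0.0, np.median(x), x.std()])
fit = minimize(penalized_negloglik, start, args=(x,), method="Nelder-Mead")
print("penalized ML estimates (h, k, loc, scale):", fit.x)
```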

8.
The authors consider the estimation of linear functions of a multivariate parameter under orthant restrictions. These restrictions are considered both for location models and for the Poisson distribution. For these models, situations are characterized in which the restricted maximum likelihood estimator dominates the unrestricted one for the estimation of any linear function of the parameter. The results obtained point directly to the importance, in these cases, of the dimension of the parameter space and of the central direction and vertex of the cone. Special attention is given to examples, such as the one-way analysis of variance, where the estimation of particular linear functions of interest, such as the coordinates and the differences between them, is also treated.
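As a concrete special case of a location model under an orthant restriction (not taken from the paper, just for orientation): for a multivariate normal mean restricted to the nonnegative orthant, the restricted MLE is the componentwise projection of the observation onto the orthant.

```latex
% Restricted MLE in the simplest orthant-restricted location model
% (illustrative special case, not reproduced from the paper).
\[
X \sim N_{p}(\theta, \sigma^{2} I_{p}), \qquad
\theta \in \mathcal{C} = \{\theta : \theta_{i} \ge 0,\ i = 1,\dots,p\},
\]
\[
\hat{\theta}^{\,\mathrm{R}}
  = \arg\max_{\theta \in \mathcal{C}}
    \Bigl\{-\tfrac{1}{2\sigma^{2}}\lVert X - \theta\rVert^{2}\Bigr\}
  = \bigl(\max(X_{1},0), \dots, \max(X_{p},0)\bigr).
\]
```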

9.
Many lifetime distribution models have successfully served as population models for risk analysis and reliability mechanisms. The Kumaraswamy distribution is one of these distributions; it is particularly useful for natural phenomena whose outcomes have lower and upper bounds, as often arises in biomedical and epidemiological research. This article studies point estimation and interval estimation for the Kumaraswamy distribution. Inverse estimators (IEs) for the parameters of the Kumaraswamy distribution are derived. Numerical comparisons with maximum likelihood estimation and bias-corrected methods clearly indicate that the proposed IEs are promising. Confidence intervals for the parameters and reliability characteristics of interest are constructed using pivotal or generalized pivotal quantities. The results are then extended to the stress–strength model involving two Kumaraswamy populations with different parameter values. Construction of confidence intervals for the stress–strength reliability is derived. Extensive simulations are used to demonstrate the performance of the confidence intervals constructed from generalized pivotal quantities.
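For reference, the Kumaraswamy density on (0, 1) is f(x) = a b x^(a-1) (1 - x^a)^(b-1), and a plain maximum likelihood fit (against which estimators such as the IEs would be compared) can be sketched in a few lines; the simulated data and starting values below are arbitrary.

```python
# Plain ML fit of the two-parameter Kumaraswamy distribution,
# f(x; a, b) = a*b*x^(a-1)*(1-x^a)^(b-1) on (0, 1).  Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
a_true, b_true = 2.0, 3.0
u = rng.uniform(size=200)
x = (1.0 - (1.0 - u) ** (1.0 / b_true)) ** (1.0 / a_true)   # inverse-cdf sampling

def negloglik(params, x):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -np.sum(np.log(a) + np.log(b)
                   + (a - 1.0) * np.log(x)
                   + (b - 1.0) * np.log1p(-x ** a))

fit = minimize(negloglik, x0=np.array([1.0, 1.0]), args=(x,), method="Nelder-Mead")
print("ML estimates (a, b):", fit.x)
```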

10.
This paper contributes to the problem of estimating state space model parameters by proposing estimators for the mean, the autoregressive parameters and the noise variances which, in contrast to maximum likelihood, may be calculated without assuming any specific distribution for the errors. The suggested estimators widen the scope of application of the generalized method of moments to some heteroscedastic models, as in the case of state-space models with varying coefficients, and sufficient conditions for their consistency are given. The paper includes a simulation study comparing the proposed estimators with maximum likelihood estimators. Finally, these methods are applied to the calibration of meteorological radar and the estimation of areal rainfall.

11.
Several bivariate beta distributions have been proposed in the literature. In particular, Olkin and Liu [A bivariate beta distribution. Statist Probab Lett. 2003;62(4):407–412] proposed a 3-parameter bivariate beta model, which Arnold and Ng [Flexible bivariate beta distributions. J Multivariate Anal. 2011;102(8):1194–1202] extended to 5- and 8-parameter models. The 3-parameter model allows only positive correlation, while the latter models can accommodate both positive and negative correlation. However, this comes at the expense of a density that is mathematically intractable. The focus of this research is on Bayesian estimation for the 5- and 8-parameter models. Since the likelihood does not exist in closed form, we apply approximate Bayesian computation, a likelihood-free approach. Simulation studies have been carried out for the 5- and 8-parameter cases under various priors and tolerance levels. We apply the 5-parameter model to a real data set by allowing the model to serve as a prior for the correlated proportions of a bivariate beta-binomial model. Results and comparisons are then discussed.
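The core of the likelihood-free approach is an ABC rejection step: draw parameters from the prior, simulate data from the model, and keep draws whose summary statistics fall within a tolerance of the observed ones. The sketch below illustrates this on the 3-parameter Olkin-Liu model (whose gamma-ratio construction is easy to simulate) rather than on the 5- or 8-parameter extensions; the priors, summaries and tolerance are arbitrary illustrative choices.

```python
# ABC rejection sampler sketched for the 3-parameter Olkin-Liu bivariate beta
# (U = G1/(G1+G3), V = G2/(G2+G3) with independent gammas); priors, summary
# statistics and tolerance are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(5)

def simulate_olkin_liu(a, b, c, n, rng):
    g1, g2, g3 = rng.gamma(a, size=n), rng.gamma(b, size=n), rng.gamma(c, size=n)
    return g1 / (g1 + g3), g2 / (g2 + g3)

def summaries(u, v):
    return np.array([u.mean(), v.mean(), u.std(), v.std(), np.corrcoef(u, v)[0, 1]])

# "Observed" data simulated from known parameters for the demonstration
u_obs, v_obs = simulate_olkin_liu(2.0, 3.0, 1.5, 300, rng)
s_obs = summaries(u_obs, v_obs)

accepted, tol, n_draws = [], 0.15, 20000
for _ in range(n_draws):
    theta = rng.uniform(0.5, 5.0, size=3)                # flat prior on (0.5, 5)
    u, v = simulate_olkin_liu(*theta, 300, rng)
    if np.linalg.norm(summaries(u, v) - s_obs) < tol:    # rejection step
        accepted.append(theta)

print("acceptance rate:", len(accepted) / n_draws)
if accepted:
    print("posterior means (a, b, c):", np.array(accepted).mean(axis=0))
```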

12.
We propose a bivariate Weibull regression model with heterogeneity (frailty, or random effect) which is generated by a Weibull distribution. We assume that the bivariate survival data follow the bivariate Weibull of Hanagal (Econ Qual Control 19:83–90, 2004). There are some interesting situations, such as survival times in genetic epidemiology, dental implants of patients and twin births (both monozygotic and dizygotic), where the genetic behavior of patients (which is unknown and random) follows a known frailty distribution. These situations motivate the study of this particular model. We propose two-stage maximum likelihood estimation for the hierarchical likelihood in the proposed model. We present a small simulation study comparing these estimates with the true values of the parameters, and it is observed that the estimates are very close to the true values.

13.
Some work has been done in the past on estimation for the three-parameter gamma distribution based on complete and censored samples. In this paper, we develop estimation methods based on progressively Type-II censored samples from a three-parameter gamma distribution. In particular, we develop some iterative methods for the determination of the maximum likelihood estimates (MLEs) of all three parameters. It is shown that the proposed iterative scheme converges to the MLEs. In this context, we propose another method of estimation based on the missing information principle and moment estimators. Simple alternatives to the above two methods are also suggested. The proposed estimation methods are then illustrated with a numerical example. We also consider interval estimation based on large-sample theory and examine the actual coverage probabilities of these confidence intervals for small samples using a Monte Carlo simulation study.
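Progressive Type-II censoring can be simulated directly from its definition: after each observed failure, a prescribed number of surviving units is withdrawn at random. A minimal sketch is given below, here with three-parameter gamma lifetimes (a gamma with a location shift); the removal scheme and parameter values are arbitrary.

```python
# Simulate a progressively Type-II censored sample by withdrawing R[i] random
# survivors after the i-th observed failure.  Illustrative sketch; the removal
# scheme and the three-parameter gamma values are arbitrary.
import numpy as np

rng = np.random.default_rng(6)

def progressive_type2_sample(lifetimes, R, rng):
    """Return the m observed failure times under removal scheme R (len(R) = m)."""
    alive = list(lifetimes)
    observed = []
    for r in R:
        t = min(alive)                      # next failure among surviving units
        observed.append(t)
        alive.remove(t)
        # withdraw r surviving units at random (pop indices from the back)
        for idx in sorted(rng.choice(len(alive), size=r, replace=False))[::-1]:
            alive.pop(idx)
    return np.array(observed)

n, m = 30, 10
R = [2] * m                                 # removals sum to n - m = 20
shape, scale, location = 2.5, 1.5, 4.0      # three-parameter gamma
lifetimes = location + rng.gamma(shape, scale, size=n)
obs = progressive_type2_sample(lifetimes, R, rng)
print("removal scheme:", R, "sum:", sum(R))
print("observed failures:", np.round(obs, 3))
```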

14.
In this paper, a unified maximum marginal likelihood estimation procedure is proposed for the analysis of right-censored data using general partially linear varying-coefficient transformation models (GPLVCTM), which are flexible enough to include many survival models as special cases. The unknown functional coefficients in the models are approximated by cubic B-spline polynomials. We estimate the B-spline coefficients and regression parameters by maximizing the marginal likelihood function. One advantage of this procedure is that it is free of both the baseline and the censoring distribution. Through simulation studies and a real data application (the VA data from the Veteran's Administration Lung Cancer Study Clinical Trial), we illustrate that the proposed estimation procedure is accurate, stable and practical.
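The spline step is the standard basis expansion of each unknown coefficient function; in generic notation (chosen here for illustration, not taken from the paper), with cubic B-spline basis functions B_1, ..., B_J on a knot sequence,

```latex
% Cubic B-spline approximation of a varying coefficient (generic notation).
\[
\beta_{k}(u) \;\approx\; \sum_{j=1}^{J} \gamma_{kj}\, B_{j}(u),
\qquad k = 1,\dots,q,
\]
% so maximizing the marginal likelihood over the spline coefficients
% gamma_{kj} and the parametric regression coefficients gives the fit.
```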

15.
In this paper we use the total time on test transformation to establish a method for constructing parametric models of lifetime distributions having a bathtub-shaped failure rate. We study a particular model which is simple compared with other existing models. We derive expressions for moments and quantiles and treat estimation methods. In particular, the maximum likelihood method is studied, and consistency proofs are given.
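For reference, the (scaled) total time on test transform underlying such constructions is usually defined as follows (standard definition, with F the lifetime distribution and S = 1 - F):

```latex
% Scaled total time on test (TTT) transform of a lifetime distribution F.
\[
H_{F}^{-1}(t) = \int_{0}^{F^{-1}(t)} S(u)\,\mathrm{d}u,
\qquad
\varphi_{F}(t) = \frac{H_{F}^{-1}(t)}{H_{F}^{-1}(1)},
\qquad 0 \le t \le 1 .
\]
% A bathtub-shaped failure rate corresponds to a scaled TTT curve that is
% first convex and then concave.
```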

16.
Two classes of semiparametric and nonparametric mixture models are defined to represent general kinds of prior information. For these models the nonparametric maximum likelihood estimator (NPMLE) of an unknown probability distribution is derived and is shown to be consistent and relatively efficient. Linear functionals are used for the estimation of parameters. Their consistency is proved, the efficiency gain is derived and asymptotic distributions are given.

17.
Data sets with excess zeroes are frequently analyzed in many disciplines. A common framework used to analyze such data is the zero-inflated (ZI) regression model. It mixes a degenerate distribution with point mass at zero with a non-degenerate distribution. The estimates from ZI models quantify the effects of covariates on the means of latent random variables, which are often not the quantities of primary interest. Recently, marginal zero-inflated Poisson (MZIP; Long et al. [A marginalized zero-inflated Poisson regression model with overall exposure effects. Stat. Med. 33 (2014), pp. 5151–5165]) and negative binomial (MZINB; Preisser et al., 2016) models have been introduced that model the mean response directly. These models yield covariate effects with simple interpretations that are, for many applications, more appealing than those available from ZI regression. This paper outlines a general framework for marginal zero-inflated models where the latent distribution is a member of the exponential dispersion family, focusing on common distributions for count data. In particular, our discussion includes the marginal zero-inflated binomial (MZIB) model, which has not been discussed previously. The details of maximum likelihood estimation via the EM algorithm are presented, and the properties of the estimators as well as Wald and likelihood ratio based inference are examined via simulation. Two examples are presented to illustrate the advantages of the MZIP, MZINB, and MZIB models for practical data analysis.
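The distinguishing feature of the marginalized formulation is that the regression is placed on the overall (marginal) mean rather than on the latent Poisson mean; a sketch of the MZIP structure as it is commonly written is given below, with notation chosen here, so treat the details as an assumption rather than a quotation from Long et al.

```latex
% Marginalized zero-inflated Poisson (MZIP): the covariates act on the overall
% mean nu_i = E[Y_i] = (1 - psi_i) mu_i rather than on the latent mean mu_i.
\[
P(Y_i = 0) = \psi_i + (1-\psi_i)\,e^{-\mu_i},
\qquad
P(Y_i = y) = (1-\psi_i)\,\frac{e^{-\mu_i}\mu_i^{\,y}}{y!},\quad y = 1, 2, \dots,
\]
\[
\operatorname{logit}(\psi_i) = \mathbf{z}_i^{\mathsf T}\boldsymbol{\gamma},
\qquad
\log \nu_i = \log\{(1-\psi_i)\,\mu_i\} = \mathbf{x}_i^{\mathsf T}\boldsymbol{\alpha},
\]
% so exp(alpha_j) is a ratio of marginal means, which is the directly
% interpretable covariate effect referred to above.
```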

18.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models, whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler for the Bayesian approach. Both algorithms involve the simulation of non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved. The errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.

19.
The paper deals with discrete-time regression models to analyze multistate-multiepisode models for event history data or failure time data collected in follow-up studies, retrospective studies, or longitudinal panels. The models are applicable if the events are not dated exactly but only a time interval is recorded. The models include individual-specific parameters to account for unobserved heterogeneity. The explanatory variables may be time-varying and random, with distributions depending on the observed history of the process. Different estimation procedures are considered: estimation of both structural and individual-specific parameters by maximization of a joint likelihood function; estimation of the structural parameters by maximization of a conditional likelihood function, conditioning on a set of sufficient statistics for the individual-specific parameters; and estimation of the structural parameters by maximization of a marginal likelihood function, assuming that the individual-specific parameters follow a distribution. The advantages and limitations of the different approaches are discussed.
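The building block of such discrete-time models is the interval-specific hazard with an individual-specific term; in generic notation (introduced here for illustration, not quoted from the paper):

```latex
% Discrete-time hazard for individual i in interval t, with an individual-
% specific parameter alpha_i capturing unobserved heterogeneity.
\[
\lambda_{it}
  = P\bigl(T_i = t \mid T_i \ge t,\ \mathbf{x}_{it},\ \alpha_i\bigr)
  = F\bigl(\mathbf{x}_{it}^{\mathsf T}\boldsymbol{\beta} + \alpha_i\bigr),
\]
% where F is a response function such as the logistic or the complementary
% log-log link; the joint, conditional and marginal likelihoods discussed
% above differ in how the alpha_i are handled.
```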

20.
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest; their asymptotic chi-square distribution guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported.
