Similar Articles
20 similar articles found.
1.
The zero-inflated Poisson regression model is a special case of finite mixture models that is useful for count data containing many zeros. Typically, maximum likelihood (ML) estimation is used for fitting such models. However, it is well known that the ML estimator is highly sensitive to the presence of outliers and can become unstable when mixture components are poorly separated. In this paper, we propose an alternative robust estimation approach, robust expectation-solution (RES) estimation. We compare the RES approach with an existing robust approach, minimum Hellinger distance (MHD) estimation. Simulation results indicate that both methods improve on ML when outliers are present and/or when the mixture components are poorly separated. However, the RES approach is more efficient in all the scenarios we considered. In addition, the RES method is shown to yield consistent and asymptotically normal estimators and, in contrast to MHD, can be applied quite generally.
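For background on the baseline being robustified, plain ML fitting of a zero-inflated Poisson without covariates can be sketched as follows. This is a generic illustration of the standard ML approach, not the paper's RES or MHD estimators; the parameter names (pi for the zero-inflation probability, lam for the Poisson mean) and the simulated data are our own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    # ZIP likelihood: P(0) = pi + (1-pi) e^{-lam}; P(k) = (1-pi) Pois(k; lam)
    pi, lam = params
    zero = (y == 0)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return -np.where(zero, ll_zero, ll_pos).sum()

rng = np.random.default_rng(0)
n = 5000
structural_zero = rng.random(n) < 0.3          # true pi = 0.3
y = np.where(structural_zero, 0, rng.poisson(2.0, size=n))  # true lam = 2.0

res = minimize(zip_negloglik, x0=[0.5, 1.0], args=(y,),
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
pi_hat, lam_hat = res.x
```

With many observations and well-separated components this behaves well; the instability the abstract describes appears when lam is small (so structural and sampling zeros are hard to distinguish) or when outliers contaminate the counts.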

2.
This paper presents a methodology based on transforming estimation methods into optimization problems in order to incorporate, in a natural way, constraints that carry extra information not used by standard estimation methods, with the aim of improving the quality of the parameter estimates. We consider three types of such information: bounds on the cumulative distribution function, bounds on the quantiles, and restrictions on the parameters such as those imposed by the support of the random variable under consideration. The method is quite general and can be applied to many estimation methods, such as maximum likelihood (ML), the method of moments (MOM), least squares, least absolute values, and minimax. The performance of the resulting estimates is investigated for ML and MOM across several families of distributions, using simulations. The simulation results show that for small sample sizes important gains can be achieved relative to the case where the above information is ignored. In addition, we discuss sensitivity analysis methods for assessing the influence of observations on the proposed estimators. The method applies to both univariate and multivariate data.

3.
There are many methods for analyzing longitudinal ordinal response data with random dropout, including maximum likelihood (ML), weighted estimating equations (WEE), and multiple imputation (MI). In this article, using a Markov model in which the effect of the previous response on the current response is treated as an ordinal variable, the likelihood is partitioned to simplify the use of existing software. Simulated data, generated to represent a three-period longitudinal study with random dropout, are used to compare the performance of the ML, WEE, and MI methods in terms of standardized bias and coverage probabilities. The estimation methods are then applied to a real medical data set.

4.
The maximum likelihood (ML) method is used to estimate the unknown gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated, and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used; in this estimator, estimation of the Liu parameter d is an important problem. A few estimation methods are available in the literature for this parameter. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study is conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the simulation and application results, the Liu estimator is shown to be consistently superior to ML, and a recommendation is given as to which Liu parameter estimate should be used in the Liu estimator for the GR model.
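For orientation, the Liu estimator in its original linear-regression form (Liu, 1993) is beta_d = (X'X + I)^{-1}(X'X + d I) beta_OLS; the gamma-regression version in the paper replaces X'X with its weighted GLM analogue. A minimal sketch of the shrinkage mechanics, with our own toy near-collinear data:

```python
import numpy as np

def liu_estimator(X, y, d):
    # Liu (1993) linear-model form: beta_d = (X'X + I)^{-1} (X'X + d I) beta_OLS
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    I = np.eye(XtX.shape[0])
    return np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 3))
X[:, 2] = X[:, 1] + 0.01 * rng.standard_normal(100)   # near-collinear columns
y = X @ np.array([1.0, 1.0, 1.0]) + rng.standard_normal(100)

beta_ols = liu_estimator(X, y, d=1.0)   # d = 1 reproduces OLS exactly
beta_d = liu_estimator(X, y, d=0.5)     # d < 1 shrinks the estimate
```

Since (X'X + I)^{-1}(X'X + d I) has all eigenvalues in (0, 1) for d < 1, the Liu estimate has a smaller norm than OLS, which is the variance-reduction mechanism the choice of d trades against bias.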

5.
A new approach is proposed for maximum likelihood (ML) estimation in continuous univariate distributions. The procedure is intended primarily to complement the ML method, which can fail in situations such as the gamma and Weibull distributions when the shape parameter is at most unity. The new approach provides consistent and efficient estimates for all possible values of the shape parameter. Its performance is examined via simulations. Two other improved general ML methods are reported for comparative purposes. The methods are used to fit the gamma and Weibull distributions to air pollution data from Melbourne. The new ML method is accurate when the shape parameter is less than unity and is also superior to the maximum product of spacings estimation method for the Weibull distribution.

6.
In many areas of application mixed linear models serve as a popular tool for analyzing highly complex data sets. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators, (RE)ML, are commonly pursued. However, it is well-known that these fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to other violations of distributional assumptions. In this article, we propose a new class of robust-efficient estimators for inference in mixed linear models. The new three-step estimation procedure provides truncated generalized least squares and variance components' estimators with hard-rejection weights adaptively computed from the data. More specifically, our data re-weighting mechanism first detects and removes within-subject outliers, then identifies and discards between-subject outliers, and finally it employs maximum likelihood procedures on the “clean” data. Theoretical efficiency and robustness properties of this approach are established.

7.
Maximum likelihood (ML) estimation with spatial econometric models is a long-standing problem that finds application in several areas of economic importance. The problem is particularly challenging in the presence of missing data, since there is an implied dependence between all units, irrespective of whether they are observed or not. Out of the several approaches adopted for ML estimation in this context, that of LeSage and Pace [Models for spatially dependent missing data. J Real Estate Financ Econ. 2004;29(2):233–254] stands out as one of the most commonly used with spatial econometric models due to its ability to scale with the number of units. Here, we review their algorithm, and consider several similar alternatives that are also suitable for large datasets. We compare the methods through an extensive empirical study and conclude that, while the approximate approaches are suitable for large sampling ratios, for small sampling ratios the only reliable algorithms are those that yield exact ML or restricted ML estimates.

8.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) and maximum likelihood (ML) estimates are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter estimate is the minimum sample value. In the second, called the modified ML (MML) method, the estimates are found by using the maximum likelihood value of the shape parameter in terms of the scale parameter together with the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter, and for both complete and censored samples. In many cases, however, the ML estimates are comparable in their efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of α, for censored samples and all methods, underestimation can be a problem.
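The closed-form ML estimates described above (scale estimated by the sample minimum, shape by profiling the likelihood) can be sketched for uncensored data as follows; the variable names and simulated sample are our own.

```python
import numpy as np

def pareto_ml(x):
    # Pareto Type I: f(x) = a * s^a / x^{a+1}, x >= s.
    # The likelihood increases in s up to min(x), so scale_hat = min(x);
    # plugging it in and solving the score equation gives the shape.
    x = np.asarray(x, dtype=float)
    scale_hat = x.min()
    shape_hat = len(x) / np.log(x / scale_hat).sum()
    return shape_hat, scale_hat

rng = np.random.default_rng(1)
true_shape, true_scale = 3.0, 2.0
u = rng.random(20000)
x = true_scale * (1.0 - u) ** (-1.0 / true_shape)   # inverse-CDF sampling
shape_hat, scale_hat = pareto_ml(x)
```

Note the scale estimate is always at least the true scale (it is the sample minimum), which is one source of the boundary behavior that motivates the MML variant compared in the paper.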

9.
The generalized Pareto distribution (GPD) has been widely used in the extreme value framework. The success of the GPD when applied to real data sets depends substantially on the parameter estimation process. Several methods exist in the literature for estimating the GPD parameters. Mostly, the estimation is performed by maximum likelihood (ML). Alternatively, the probability weighted moments (PWM) and the method of moments (MOM) are often used, especially when the sample sizes are small. Although these three approaches are the most common and quite useful in many situations, their extensive use is also due to the lack of knowledge about other estimation methods. In fact, many other methods exist in the extreme value and hydrological literatures and are not widely known to practitioners in other areas. This is the first of two papers that aim to fill this gap. We extensively review some of the methods used for estimating the GPD parameters, focusing on those that can be applied in practice in a simple and straightforward manner.
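As one example of the simple moment-based alternatives such a review covers, the MOM estimates for the GPD follow from its mean σ/(1−ξ) and variance σ²/((1−ξ)²(1−2ξ)), valid for shape ξ < 1/2. A sketch with our own simulated data (the names xi and sigma are ours):

```python
import numpy as np

def gpd_mom(x):
    # Solve the mean/variance equations of the GPD for shape and scale:
    # m^2/s^2 = 1 - 2*xi  =>  xi = (1 - m^2/s^2)/2,  sigma = m*(1 - xi)
    m, s2 = x.mean(), x.var(ddof=1)
    xi_hat = 0.5 * (1.0 - m * m / s2)
    sigma_hat = 0.5 * m * (m * m / s2 + 1.0)
    return xi_hat, sigma_hat

rng = np.random.default_rng(2)
xi, sigma = 0.1, 1.0
u = rng.random(200000)
x = sigma / xi * ((1.0 - u) ** (-xi) - 1.0)   # GPD inverse CDF
xi_hat, sigma_hat = gpd_mom(x)
```

MOM is attractive for its simplicity, but as the paper's framing suggests, its usefulness degrades for heavier tails (larger ξ), where the required moments barely exist or are very noisy.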

10.
In this paper, we study a first-order nonlinear autoregressive (AR) model with skew normal innovations. A semiparametric method is proposed to estimate the nonlinear part of the model, using conditional least squares for the parametric estimation and a nonparametric kernel approach for estimating the AR adjustment. Computational techniques for parameter estimation are then developed via the maximum likelihood (ML) approach using Expectation-Maximization (EM) type optimization, and an explicit iterative form for the ML estimators is obtained. The accuracy of the proposed methods is verified in a simulation study and a real application.
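The nonparametric kernel step can be illustrated with a generic Nadaraya-Watson smoother applied to lagged pairs, estimating m in X_t = m(X_{t-1}) + e_t. This is a standard kernel estimator, not the authors' exact semiparametric procedure; the model, bandwidth, and evaluation grid below are our own choices.

```python
import numpy as np

def nw_estimate(x_prev, x_curr, grid, h):
    # Nadaraya-Watson: weighted average of next values, Gaussian kernel
    # weights based on distance between grid points and lagged values.
    k = np.exp(-0.5 * ((grid[:, None] - x_prev[None, :]) / h) ** 2)
    return (k @ x_curr) / np.maximum(k.sum(axis=1), 1e-300)

rng = np.random.default_rng(3)
T = 5000
x = np.zeros(T)
for t in range(1, T):                       # true m(u) = 0.8 * tanh(u)
    x[t] = 0.8 * np.tanh(x[t - 1]) + 0.3 * rng.standard_normal()

grid = np.array([-1.0, 0.0, 1.0])
m_hat = nw_estimate(x[:-1], x[1:], grid, h=0.2)
```

The fitted curve recovers the nonlinear AR function where the lagged values have mass; in the paper this nonparametric fit is combined with parametric ML/EM steps for the skew normal innovation parameters.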

11.
In this paper, we consider estimation of reliability in the multicomponent stress-strength (MSS) model when both the stress and the strengths are drawn from a Topp-Leone (TL) distribution. Maximum likelihood (ML) and Bayesian methods are used in the estimation procedure. Bayesian estimates are obtained using Lindley's approximation and Gibbs sampling, since they cannot be obtained in explicit form under the TL distribution. Asymptotic confidence intervals are constructed based on the ML estimators, and Bayesian credible intervals are constructed using Gibbs sampling. The reliability estimates are compared via an extensive Monte Carlo simulation study. Finally, a real data set is analysed for illustrative purposes.

12.
We propose some Stein-rule combination forecasting methods that are designed to ameliorate the estimation risk inherent in making operational the variance–covariance method for constructing combination weights. By Monte Carlo simulation, it is shown that this amelioration can be substantial in many cases. Moreover, generalized Stein-rule combinations are proposed that offer the user the opportunity to enhance combination forecasting performance when shrinking the feasible variance–covariance weights toward a fortuitous shrinkage point. In an empirical exercise, the proposed Stein-rule combinations performed well relative to competing combination methods.
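The variance–covariance weights that these Stein-rule methods shrink are w = S⁻¹ι / (ι′S⁻¹ι), where S is the forecast-error covariance matrix and ι a vector of ones. A deterministic toy example (the diagonal S and names are ours):

```python
import numpy as np

def vc_weights(S):
    # Minimum-variance combination weights subject to summing to one.
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

S = np.diag([1.0, 2.0, 4.0])     # error variances of three unbiased forecasts
w = vc_weights(S)                # proportional to [1, 1/2, 1/4]
combined_var = w @ S @ w         # variance of the combined forecast
```

With S known, the combined variance (4/7 here) is below the best individual variance; in practice S must be estimated, and it is exactly this estimation risk that the Stein-rule shrinkage targets.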

13.
Maximum likelihood (ML) estimation of relative risks via log-binomial regression requires a restricted parameter space. Computation via nonlinear programming is simple to implement and has a high convergence rate. We show that the optimization problem is well posed (convex domain and convex objective) and provide a variance formula, along with a methodology for obtaining standard errors and prediction intervals that accounts for estimates on the boundary of the parameter space. We perform simulations under several scenarios already used in the literature in order to assess the performance of ML and of two other common estimation methods.
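The constrained-ML idea can be sketched for a single uniform covariate, where the restriction Xβ ≤ 0 (keeping fitted probabilities exp(x′β) in (0, 1]) reduces to checks at the two endpoints of the covariate range. The data-generating values below are our own toy choices, not the paper's simulation scenarios.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([np.ones(n), rng.random(n)])   # intercept + x in [0, 1]
beta_true = np.array([-1.5, 0.8])
y = (rng.random(n) < np.exp(X @ beta_true)).astype(float)  # log link

def negloglik(beta):
    # Guard keeps log1p(-exp(eta)) finite if the optimizer probes the boundary.
    eta = np.minimum(X @ beta, -1e-9)
    return -(y * eta + (1.0 - y) * np.log1p(-np.exp(eta))).sum()

# Since eta is linear in x on [0, 1], X beta <= 0 holds iff it holds at x = 0, 1.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
res = minimize(negloglik, x0=np.array([-1.0, 0.0]), method="trust-constr",
               constraints=[LinearConstraint(A, ub=0.0)])
beta_hat = res.x
```

Here the truth is interior (max eta = −0.7), so the constraint is inactive and the estimate is close to β; the paper's variance methodology addresses the harder case where the estimate lands on the boundary.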

14.
OLS and ML: A Comparative Study of Two Parameter Estimation Methods for Regression Models
Ordinary least squares (OLS) and maximum likelihood (ML) are the two most important methods for estimating the parameters of regression models, but they differ in notable ways. This paper compares the relevant differences between the two.
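One well-known point of comparison: under normally distributed errors the ML estimate of the regression coefficients coincides with OLS, while the ML estimate of the error variance divides by n rather than n − p. A quick numerical check (the simulated data are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.standard_normal(n)

# OLS solution; under normal errors this is also the ML coefficient estimate.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

sigma2_ml = resid @ resid / n          # ML variance estimate (biased)
sigma2_ols = resid @ resid / (n - p)   # OLS/unbiased variance estimate
```

The coefficient estimates are identical by construction; the variance estimates differ by the factor n/(n − p), which vanishes asymptotically but matters in small samples.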

15.
Sequential Monte Carlo methods (also known as particle filters and smoothers) are used for filtering and smoothing in general state-space models. These methods are based on importance sampling. In practice, it is often difficult to find a suitable proposal which allows effective importance sampling. This article develops an original particle filter and an original particle smoother which employ nonparametric importance sampling. The basic idea is to use a nonparametric estimate of the marginally optimal proposal. The proposed algorithms provide a better approximation of the filtering and smoothing distributions than standard methods. The methods’ advantage is most distinct in severely nonlinear situations. In contrast to most existing methods, they allow the use of quasi-Monte Carlo (QMC) sampling. In addition, they do not suffer from weight degeneration, rendering a resampling step unnecessary. For the estimation of model parameters, an efficient on-line maximum-likelihood (ML) estimation technique is proposed which is also based on nonparametric approximations. All suggested algorithms have almost linear complexity for low-dimensional state-spaces. This is an advantage over standard smoothing and ML procedures. In particular, all existing sequential Monte Carlo methods that incorporate QMC sampling have quadratic complexity. As an application, stochastic volatility estimation for high-frequency financial data is considered, which is of great importance in practice. The computer code is partly available as supplemental material.
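A minimal bootstrap particle filter for a toy AR(1)-plus-noise model shows the baseline SMC machinery such methods build on; the article's nonparametric proposal is not reproduced here, and all model settings are our own.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 200, 2000                  # time steps, particles
phi, sw, sv = 0.9, 0.3, 0.5      # AR coefficient, state noise sd, obs noise sd

# Simulate states and observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sw * rng.standard_normal()
y = x + sv * rng.standard_normal(T)

particles = rng.standard_normal(N)
filt_mean = np.zeros(T)
for t in range(T):
    if t > 0:   # bootstrap proposal: propagate through the transition
        particles = phi * particles + sw * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles) / sv) ** 2   # Gaussian observation weight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filt_mean[t] = w @ particles
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling
rmse = np.sqrt(np.mean((filt_mean - x) ** 2))
```

The transition density is a poor proposal when observations are very informative, which is exactly the weight-degeneration problem the nonparametric (marginally optimal) proposal in the article is designed to avoid.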

16.
Consistency of Generalized Maximum Spacing Estimates
General methods for the estimation of distributions can be derived from approximations of certain information measures. For example, both the maximum likelihood (ML) method and the maximum spacing (MSP) method can be obtained from approximations of the Kullback–Leibler information. The ideas behind the MSP method, whereby an estimation method for continuous univariate distributions is obtained from an approximation based on spacings of an information measure, were used by Ranneby & Ekstrom (1997) (using simple spacings) and Ekstrom (1997b) (using high order spacings) to obtain a class of methods, called generalized maximum spacing (GMSP) methods. In the present paper, GMSP methods will be shown to give consistent estimates under general conditions, comparable to those of Bahadur (1971) for the ML method, and those of Shao & Hahn (1999) for the MSP method. In particular, it will be proved that GMSP methods give consistent estimates in any family of distributions with unimodal densities, without any further conditions on the distributions.
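The basic MSP objective maximizes the mean log spacing of the fitted CDF evaluated at the ordered sample, with F = 0 and F = 1 appended at the ends. A sketch for a one-parameter exponential model (our own toy example, not the GMSP generalization with higher-order spacings):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def msp_objective(rate, x_sorted):
    # Mean log of first-order spacings of the fitted exponential CDF.
    cdf = 1.0 - np.exp(-rate * x_sorted)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return np.log(np.maximum(spacings, 1e-300)).mean()

rng = np.random.default_rng(8)
x = np.sort(rng.exponential(scale=0.5, size=5000))   # true rate = 2

res = minimize_scalar(lambda r: -msp_objective(r, x),
                      bounds=(1e-3, 100.0), method="bounded")
rate_hat = res.x
```

For regular models like this one, the MSP estimate is close to ML; the appeal of MSP/GMSP is that it remains well behaved in the unbounded-likelihood cases where ML fails.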

17.
The composite likelihood is amongst the computational methods used for estimation of the generalized linear mixed model (GLMM) in the context of bivariate meta-analysis of diagnostic test accuracy studies. Its advantage is that the likelihood can be derived conveniently under the assumption of independence between the random effects, but there has not been a clear analysis of the merit or necessity of this method. For synthesis of diagnostic test accuracy studies, a copula mixed model has been proposed in the biostatistics literature. This general model includes the GLMM as a special case and can also allow for flexible dependence modelling, different from assuming simple linear correlation structures, normality and tail independence in the joint tails. A maximum likelihood (ML) method, which is based on evaluating the bi-dimensional integrals of the likelihood with quadrature methods, has been proposed, and in fact it eases any computational difficulty that might be caused by the double integral in the likelihood function. Both methods are thoroughly examined with extensive simulations and illustrated with data of a published meta-analysis. It is shown that the ML method has no non-convergence issues or computational difficulties and at the same time allows estimation of the dependence between study-specific sensitivity and specificity and thus prediction via summary receiver operating curves.

18.
Estimation of the parameters of the Weibull distribution is considered using different methods of estimation based on different sampling schemes, namely simple random sampling (SRS), ranked set sampling (RSS), and modified ranked set sampling (MRSS). The estimation methods used are maximum likelihood (ML), the method of moments (MOM), and Bayes. Comparisons between estimators are made through simulation via their biases, relative efficiency (RE), and Pitman nearness probability (PN). Estimators based on RSS and MRSS are shown to have many advantages over those based on SRS.

19.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

20.
Prediction on the basis of censored data plays an important role in many fields. This article develops non-Bayesian two-sample prediction based on a progressive Type-II right censoring scheme. We obtain the maximum likelihood (ML) prediction in a general form for lifetime models, including the Weibull distribution. The Weibull distribution is then used to obtain the ML predictor (MLP), the ML prediction estimate (MLPE), the asymptotic ML prediction interval (AMLPI), and the asymptotic predictive ML intervals of the sth-order statistic in a future random sample (Ys) drawn independently from the parent population, for an arbitrary progressive censoring scheme. To this end, we present three ML prediction methods, namely the numerical solution, the EM algorithm, and the approximate ML prediction. We compare the performance of these methods under asymptotic normality and bootstrap methods by Monte Carlo simulation, with respect to the biases and mean square prediction errors (MSPEs) of the MLPs of Ys as well as the coverage probabilities (CP) and average lengths (AL) of the AMLPIs. Finally, we give a numerical example and a real data sample to assess the computational comparison of these ML prediction methods.
