Similar Articles
20 similar articles found (search time: 15 ms)
1.
The paper shows that many estimation methods, including ML, moments, even-points, empirical c.f. and minimum chi-square, can be regarded as scoring procedures using weighted sums of the discrepancies between observed and expected frequencies. The nature of the weights is investigated for many classes of distributions; the study of approximations to the weights clarifies the relationships between estimation methods and also leads to useful formulae for initial values for ML iteration.

2.
The maximum likelihood (ML) method is used to estimate the unknown Gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on ML may not be trustworthy. To combat multicollinearity, the Liu estimator has been used; within this estimator, choosing the Liu parameter d is an important problem. A few estimation methods for this parameter are available in the literature. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study is conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the simulation and application results, the Liu estimator is shown to be always superior to ML, and a recommendation is given as to which Liu parameter estimator should be used in the GR model.
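As a rough illustration of the shrinkage mechanics behind the Liu estimator, here is a minimal numpy sketch in the linear-model case of Liu (1993); the GR version replaces X'X by X'WX evaluated at the ML fit, and all names and values below are illustrative, not the paper's implementation:

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu (1993) shrinkage in the linear model:
    beta_d = (X'X + I)^{-1} (X'X + d*I) beta_OLS, with 0 <= d <= 1.
    The Gamma-regression version replaces X'X by X'WX, with W the GLM
    weight matrix evaluated at the ML fit."""
    XtX = X.T @ X
    p = X.shape[1]
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    beta_d = np.linalg.solve(XtX + np.eye(p),
                             (XtX + d * np.eye(p)) @ beta_ols)
    return beta_ols, beta_d

rng = np.random.default_rng(0)
n = 100
z = rng.normal(size=n)
X = np.column_stack([z, z + 0.01 * rng.normal(size=n)])  # near-collinear
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)
beta_ols, beta_liu = liu_estimator(X, y, d=0.5)
```

Since (X'X + dI)(X'X + I)^{-1} has eigenvalues (λ + d)/(λ + 1) ≤ 1 when d ≤ 1, the Liu estimate shrinks the unstable ML/OLS solution componentwise in the eigenbasis of X'X, which is what reduces its MSE under collinearity.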

3.
The zero-inflated Poisson regression model is a special case of finite mixture models that is useful for count data containing many zeros. Typically, maximum likelihood (ML) estimation is used for fitting such models. However, it is well known that the ML estimator is highly sensitive to the presence of outliers and can become unstable when mixture components are poorly separated. In this paper, we propose an alternative robust estimation approach, robust expectation-solution (RES) estimation. We compare the RES approach with an existing robust approach, minimum Hellinger distance (MHD) estimation. Simulation results indicate that both methods improve on ML when outliers are present and/or when the mixture components are poorly separated; however, the RES approach is more efficient in all the scenarios we considered. In addition, the RES method is shown to yield consistent and asymptotically normal estimators and, in contrast to MHD, can be applied quite generally.

4.
The seemingly unrelated regression model is viewed in the context of repeated measures analysis. The regression parameters and the variance-covariance matrix of the seemingly unrelated regression model can be estimated by two-stage Aitken estimation: the first stage obtains a consistent estimator of the variance-covariance matrix, and the second stage uses this matrix to obtain generalized least squares estimators of the regression parameters. The maximum likelihood (ML) estimators of the regression parameters can be obtained by performing the two-stage estimation iteratively. This iterative two-stage procedure is shown to be equivalent to the EM algorithm (Dempster, Laird, and Rubin, 1977) proposed by Jennrich and Schluchter (1986) and Laird, Lange, and Stram (1987) for repeated measures data. The equivalence of the iterative two-stage estimator and the ML estimator was demonstrated empirically in a Monte Carlo study by Kmenta and Gilbert (1968), but it does not appear to be widely known that the two estimators are theoretically equivalent. This paper demonstrates that equivalence.

5.
This article analyzes the effects of multicollinearity on the maximum likelihood (ML) estimator for the Tobit regression model. Furthermore, a ridge regression (RR) estimator is proposed, since the mean squared error (MSE) of ML becomes inflated when the regressors are collinear. To investigate the performance of the traditional ML and RR approaches, we use Monte Carlo simulations with MSE as the performance criterion. The simulation results indicate that the RR approach should always be preferred to the ML estimation method.

6.
The family of power series cure rate models provides a flexible framework for modeling survival data from populations with a cure fraction. In this work, we present a simplified estimation procedure for the maximum likelihood (ML) approach. ML estimates are obtained via the expectation-maximization (EM) algorithm, where the expectation step involves computing the expected number of concurrent causes for each individual. A major advantage is that the maximization step decomposes into separate maximizations of two lower-dimensional functions, of the regression parameters and of the survival distribution parameters, respectively. Two simulation studies are performed: the first investigates the accuracy of the estimation procedure for different numbers of covariates, and the second compares our proposal with direct maximization of the observed log-likelihood function. Finally, we illustrate parameter estimation on a dataset of survival times for patients with malignant melanoma.

7.
Including time-varying covariates is a popular extension to the Cox model and a suitable approach for dealing with non-proportional hazards. However, partial likelihood (PL) estimation of this model has three shortcomings: (i) estimated regression coefficients can be less accurate in small samples with heavy censoring; (ii) the baseline hazard is not directly estimated; and (iii) a covariance matrix for both the regression coefficients and the baseline hazard is not easily produced. We address these by developing a maximum likelihood (ML) approach that jointly estimates the regression coefficients and the baseline hazard using a constrained optimisation ensuring the latter's non-negativity. We establish the asymptotic properties of these estimates, show via simulation their increased accuracy over PL estimates in small samples, and show that our method produces smoother baseline hazard estimates than the Breslow estimator. Finally, we apply our method to two examples, including an important real-world financial example estimating time to default for retail home loans. We demonstrate that using our ML estimate of the baseline hazard can give much clearer corroboratory evidence of the 'humped hazard', whereby the risk of loan default rises to a peak and then falls.

8.
In this paper, we present a statistical inference procedure for the step-stress accelerated life testing (SSALT) model with a Weibull failure time distribution and interval censoring, via the formulation of a generalized linear model (GLM). The likelihood function of an interval-censored SSALT is in general too complicated to yield analytical results. However, by transforming the failure time to an exponential distribution and using a binomial random variable for the failure counts occurring in inspection intervals, a GLM formulation with a complementary log-log link function can be constructed. The regression coefficients used for the Weibull scale parameter are estimated through the iteratively weighted least squares (IWLS) method, and the shape parameter is updated by direct maximum likelihood (ML) estimation. Confidence intervals for these parameters are estimated through bootstrapping. The application of the proposed GLM approach is demonstrated by an industrial example.
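The identity behind the complementary log-log formulation can be checked numerically: for an exponential failure time with rate λ observed only at inspection times, the probability of failing within an interval of length Δ satisfies cloglog(p) = log λ + log Δ, so the interval length enters the GLM as an offset. A hedged sketch (the rate and interval values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.3        # true exponential rate (after the Weibull transform)
delta = 2.0      # inspection-interval length
t = rng.exponential(1.0 / lam, size=20000)

# grouped (interval-censored) view: did each unit fail within its
# first inspection interval?
fail = t <= delta
p_hat = fail.mean()

# complementary log-log link: cloglog(p) = log(lam) + log(delta),
# so subtracting the offset log(delta) recovers the log-rate
log_lam_hat = np.log(-np.log(1.0 - p_hat)) - np.log(delta)
```

This is why a binomial GLM with a cloglog link and log-interval offsets reproduces the exponential (and hence transformed Weibull) likelihood for grouped failure counts.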

9.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind, to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) and maximum likelihood (MLE) estimates are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter is the minimum sample value. In the second, called the modified ML (MML) method, the estimates are found by using the maximum likelihood value of the shape parameter in terms of the scale parameter, together with the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of the bias and mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ, for all sample sizes, all parameter values, and both complete and censored samples. In many cases, however, the ML estimates are comparable in efficiency, so either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of α, for censored samples and all methods, underestimation can be a problem.
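For complete samples, the ML estimates described above have closed forms: the scale estimate is the sample minimum, and given the scale, the shape estimate follows from the profile likelihood. A sketch under the Pareto type I parameterization (parameter values below are illustrative):

```python
import numpy as np

def pareto_ml(x):
    """ML estimates for the Pareto type I distribution with density
    a * g**a / x**(a+1), x >= g: the scale g-hat is the sample minimum
    and, given g-hat, the shape has the closed form n / sum(log(x/g))."""
    g_hat = x.min()
    a_hat = len(x) / np.log(x / g_hat).sum()
    return g_hat, a_hat

rng = np.random.default_rng(2)
g_true, a_true = 2.0, 3.0
# inverse-CDF simulation: F(x) = 1 - (g/x)^a
x = g_true * (1.0 - rng.uniform(size=50000)) ** (-1.0 / a_true)
g_hat, a_hat = pareto_ml(x)
```

Under censoring the closed forms change, which is exactly where the LS, ML, and MML methods compared in the paper start to diverge.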

10.
Maximum likelihood (ML) estimation of relative risks via log-binomial regression requires a restricted parameter space. Computation via nonlinear programming is simple to implement and has a high convergence rate. We show that the optimization problem is well posed (convex domain and convex objective) and provide a variance formula, along with a methodology for obtaining standard errors and prediction intervals that accounts for estimates on the boundary of the parameter space. We performed simulations under several scenarios already used in the literature in order to assess the performance of ML and of two other common estimation methods.
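A minimal sketch of the constrained ML computation: with the log link p_i = exp(x_i'β), the restriction is Xβ ≤ 0, a convex feasible region on which the negative log-likelihood is convex. The choice of SLSQP and all names below are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def log_binomial_ml(X, y):
    """Constrained ML for log-binomial (relative-risk) regression:
    minimize the negative log-likelihood subject to X beta <= 0,
    which keeps every fitted probability exp(x'beta) at most 1."""
    def nll(beta):
        eta = np.minimum(X @ beta, -1e-9)  # keep p strictly below 1
        # log p = eta; log(1-p) = log1p(-exp(eta))
        return -(y * eta + (1 - y) * np.log1p(-np.exp(eta))).sum()
    cons = {"type": "ineq", "fun": lambda beta: -(X @ beta)}
    beta0 = np.full(X.shape[1], -0.5)      # feasible starting point
    return minimize(nll, beta0, method="SLSQP", constraints=cons).x

rng = np.random.default_rng(5)
n = 20000
x = rng.uniform(size=n)
X = np.column_stack([np.ones(n), x])
p = np.exp(-1.0 + 0.5 * x)                 # true relative-risk model
y = (rng.uniform(size=n) < p).astype(float)
beta_hat = log_binomial_ml(X, y)
```

Because the problem is convex, a generic nonlinear programme like this converges reliably; the boundary cases (some constraint active at the optimum) are where the paper's variance methodology matters.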

11.
Probability plots are often used to estimate the parameters of distributions. Using large-sample properties of the empirical distribution function and order statistics, weights that stabilize the variance in a weighted least squares regression are derived. Weighted least squares regression is then applied to the estimation of the parameters of the Weibull and the Gumbel distributions. The weights are independent of the parameters of the distributions considered. Monte Carlo simulation shows that the weighted least squares estimators clearly outperform the usual least squares estimators, especially in small samples.
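A sketch of the variance-stabilized probability-plot regression for the Weibull case. The weights here come from a delta-method argument on Var F̂ ≈ p(1-p)/n; as the abstract notes, they depend only on the plotting positions, not on the Weibull parameters, though the paper's exact weight convention may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
k_true, b_true = 1.8, 5.0                  # Weibull shape and scale
x = np.sort(b_true * rng.weibull(k_true, size=2000))
n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n        # plotting positions

# linearised Weibull CDF: log(-log(1 - p)) = k*log(x) - k*log(b)
yy = np.log(-np.log(1.0 - p))
xx = np.log(x)

# delta-method variance of yy is approx p / (n (1-p) log(1-p)^2),
# so the variance-stabilising weights are known constants:
w = (1.0 - p) * np.log(1.0 - p) ** 2 / p

# weighted least squares of yy on (xx, 1)
A = np.column_stack([xx, np.ones(n)])
coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * yy))
k_hat = coef[0]
b_hat = np.exp(-coef[1] / k_hat)
```

The unweighted plot overweights the extreme order statistics, whose transformed ordinates are by far the noisiest; the weights above downweight them, which is where the small-sample gains come from.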

12.
13.
Tianqing Liu, Statistics, 2016, 50(1): 89–113
This paper proposes an empirical likelihood-based weighted (ELW) quantile regression approach for estimating conditional quantiles when some covariates are missing at random. The proposed ELW estimator is computationally simple and achieves semiparametric efficiency if the probability of missingness is correctly specified. The limiting covariance matrix of the ELW estimator can be estimated by a resampling technique, which involves neither nonparametric density estimation nor numerical derivatives. Simulation results show that the ELW method works remarkably well in finite samples. A real data example is used to illustrate the proposed ELW method.

14.
This paper introduces a new shrinkage estimator for the negative binomial regression model that generalizes the estimator proposed for the linear regression model by Liu [A new class of biased estimate in linear regression, Comm. Stat. Theor. Meth. 22 (1993), pp. 393–402]. This shrinkage estimator is proposed in order to solve the problem of the inflated mean squared error of the classical maximum likelihood (ML) method in the presence of multicollinearity. Furthermore, the paper presents some methods for estimating the shrinkage parameter. By means of Monte Carlo simulations, it is shown that the Liu estimator, applied with these shrinkage parameters, always outperforms ML. The benefit of the new estimation method is also illustrated in an empirical application. Finally, based on the results from the simulation study and the empirical application, a recommendation is given regarding which estimator of the shrinkage parameter should be used.

15.
This paper develops a varying-coefficient approach to the estimation and testing of regression quantiles under randomly truncated data. To handle the truncation, random weights are introduced and weighted quantile regression (WQR) estimators for the nonparametric functions are proposed. To achieve good efficiency properties, we further develop a weighted composite quantile regression (WCQR) estimation method for the nonparametric functions in varying-coefficient models. The asymptotic properties of both the proposed WQR and WCQR estimators are established. In addition, we propose a novel bootstrap-based procedure to test whether the nonparametric functions in varying-coefficient quantile models can be specified by particular functional forms. The performance of the proposed estimators and test procedure is investigated through simulation studies and a real data example.

16.
In this paper we propose an alternative procedure for estimating the parameters of the beta regression model, based on the EM algorithm. We take advantage of the stochastic representation of a beta random variable as a ratio of independent gamma random variables. We present a complete EM-based approach, including point and interval estimation and diagnostic tools for detecting outlying observations. As illustrated in this paper, the EM-algorithm approach provides a better estimate of the precision parameter than the direct maximum likelihood (ML) approach. We present the results of Monte Carlo simulations comparing the EM algorithm and direct ML. Finally, two empirical examples illustrate the full EM-algorithm approach for the beta regression model. This paper contains Supplementary Material.

17.
The generalized Pareto distribution (GPD) has been widely used in the extreme value framework. The success of the GPD when applied to real data sets depends substantially on the parameter estimation process. Several methods exist in the literature for estimating the GPD parameters. Most often, estimation is performed by maximum likelihood (ML); alternatively, probability weighted moments (PWM) and the method of moments (MOM) are used, especially when sample sizes are small. Although these three approaches are the most common and quite useful in many situations, their extensive use is also due to a lack of knowledge about other estimation methods. Many other methods exist in the extreme value and hydrological literatures and are not widely known to practitioners in other areas. This paper is the first of two papers that aim to fill this gap. We extensively review some of the methods used for estimating the GPD parameters, focusing on those that can be applied in practical situations in a simple and straightforward manner.

18.
The accelerated failure time (AFT) model is an important regression tool for studying the association between failure time and covariates. In this paper, we propose a robust weighted generalized M (GM) estimation method for the AFT model with right-censored data, using Kaplan–Meier weights in the GM-type objective function to estimate the regression coefficients and scale parameter simultaneously. The method is computationally simple and can be implemented with existing software. Asymptotic properties, including root-n consistency and asymptotic normality, are established for the resulting estimator under suitable conditions. We further show that the method readily extends to a class of nonlinear AFT models. Simulation results demonstrate satisfactory finite-sample performance of the proposed estimator, and its practical utility is illustrated by a real data example.
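The Kaplan–Meier weights attached to each observation in such censored-data objective functions are the jumps of the KM estimator of the failure-time distribution (zero at censored points). A minimal sketch, assuming no tied times; the function name is illustrative:

```python
import numpy as np

def kaplan_meier_weights(t, delta):
    """Jumps of the Kaplan-Meier estimator at the ordered observations.
    t: observed times; delta: 1 = failure, 0 = censored (no ties).
    Censored observations get weight zero; their mass is redistributed
    to later failures, which is what makes KM-weighted objective
    functions consistent under right censoring."""
    order = np.argsort(t)
    d = delta[order].astype(float)
    n = len(t)
    i = np.arange(1, n + 1)
    # product-limit survival after each ordered observation
    surv = np.cumprod(((n - i) / (n - i + 1)) ** d)
    s_minus = np.concatenate([[1.0], surv[:-1]])  # survival just before t_(i)
    w = d / (n - i + 1) * s_minus                 # KM jump at t_(i)
    out = np.empty(n)
    out[order] = w                                # back to input order
    return out

rng = np.random.default_rng(6)
t_unc = rng.exponential(size=10)
w_unc = kaplan_meier_weights(t_unc, np.ones(10))   # no censoring
t_cen = rng.exponential(size=10)
d_cen = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
w_cen = kaplan_meier_weights(t_cen, d_cen)         # mixed censoring
```

With no censoring every weight collapses to 1/n, so the KM-weighted objective reduces to the usual complete-data GM criterion.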

19.
There are many methods for analyzing longitudinal ordinal response data with random dropout, including maximum likelihood (ML), weighted estimating equations (WEE), and multiple imputation (MI). In this article, using a Markov model in which the effect of the previous response on the current response is treated as an ordinal variable, the likelihood is partitioned to simplify the use of existing software. Simulated data, generated to represent a three-period longitudinal study with random dropout, are used to compare the performance of the ML, WEE, and MI methods in terms of standardized bias and coverage probabilities. These estimation methods are then applied to a real medical data set.

20.
We propose a weighted empirical likelihood approach to inference with multiple samples, including stratified sampling, the estimation of a common mean using several independent and non-homogeneous samples, and inference on a particular population using other related samples. The weighting scheme and the basic result are motivated and established under stratified sampling. We show that the proposed method applies naturally to the common mean problem and to problems with related samples. The weighted approach not only provides a unified framework for inference with multiple samples, including two-sample problems, but also facilitates asymptotic derivations and computational methods. A bootstrap procedure is also proposed in conjunction with the weighted approach to provide better coverage probabilities for the weighted empirical likelihood ratio confidence intervals. Simulation studies show that the weighted empirical likelihood confidence intervals perform better than existing ones.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号