Similar Literature
1.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) and the maximum likelihood estimates (MLE) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter equals the minimum sample value. In the second, called the modified ML (MML) method, the estimates are found by utilizing the maximum likelihood value of the shape parameter in terms of the scale parameter and the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter, and for both complete and censored samples. In many cases, however, the ML estimates are comparable in their efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of α, for censored samples and all methods, underestimation can be a problem.
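The two simplest estimators above can be sketched directly. The snippet below (an illustrative sketch, not the paper's code: the function names and the i/(n+1) plotting positions for the LS fit are my own choices) contrasts the ML estimates, where the scale is the sample minimum and the shape is n divided by the sum of log-ratios, with an unweighted LS fit of the log empirical survival function:

```python
import numpy as np

def pareto_mle(x):
    """ML estimates for Pareto Type I: the scale estimate is the sample
    minimum, and the shape estimate is n / sum(log(x_i / scale))."""
    x = np.asarray(x, dtype=float)
    scale = x.min()
    shape = len(x) / np.log(x / scale).sum()
    return shape, scale

def pareto_ls(x):
    """Unweighted LS fit of log S(x) = shape*log(scale) - shape*log(x),
    using i/(n+1) plotting positions for the empirical survival function."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    surv = 1.0 - np.arange(1, n + 1) / (n + 1.0)   # empirical survival
    slope, intercept = np.polyfit(np.log(x), np.log(surv), 1)
    shape = -slope
    scale = np.exp(intercept / shape)
    return shape, scale

rng = np.random.default_rng(0)
# numpy's pareto() draws Lomax variates; (1 + Lomax) * m is Pareto I(shape, m)
sample = 2.0 * (1.0 + rng.pareto(3.0, size=500))   # shape 3, scale 2
print(pareto_mle(sample))
print(pareto_ls(sample))
```

Both estimators should recover values near the true shape 3 and scale 2 here; the paper's bias and MSE comparisons correspond to repeating such fits over many replications, with and without censoring.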

2.
Consistency of Generalized Maximum Spacing Estimates
General methods for the estimation of distributions can be derived from approximations of certain information measures. For example, both the maximum likelihood (ML) method and the maximum spacing (MSP) method can be obtained from approximations of the Kullback–Leibler information. The ideas behind the MSP method, whereby an estimation method for continuous univariate distributions is obtained from an approximation based on spacings of an information measure, were used by Ranneby & Ekstrom (1997) (using simple spacings) and Ekstrom (1997b) (using high order spacings) to obtain a class of methods, called generalized maximum spacing (GMSP) methods. In the present paper, GMSP methods will be shown to give consistent estimates under general conditions, comparable to those of Bahadur (1971) for the ML method, and those of Shao & Hahn (1999) for the MSP method. In particular, it will be proved that GMSP methods give consistent estimates in any family of distributions with unimodal densities, without any further conditions on the distributions.

3.
In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as failed. One consequence is that the number of degradation data collected varies from specimen to specimen. The information of random sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, which both assume fixed sample sizes, are commonly used for estimating parameters in the repeated measurements models typically applied to degradation data. However, the LS estimate is not consistent in the case of random sample sizes. This article derives the likelihood for the random sample size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances compared to the LS and MML estimates. All estimation methods can be greatly improved if the number of specimens increases from 5 to 10. A data set from a semiconductor application is used to illustrate our methods.

4.
A new approach is proposed for maximum likelihood (ML) estimation in continuous univariate distributions. The procedure is used primarily to complement the ML method, which can fail in situations such as the gamma and Weibull distributions when the shape parameter is at most unity. The new approach provides consistent and efficient estimates for all possible values of the shape parameter. Its performance is examined via simulations. Two other improved general ML methods are reported for comparative purposes. The methods are used to estimate the gamma and Weibull distributions using air pollution data from Melbourne. The new ML method is accurate when the shape parameter is less than unity and is also superior to the maximum product of spacings estimation method for the Weibull distribution.

5.
The generalized Pareto distribution (GPD) has been widely used in the extreme value framework. The success of the GPD when applied to real data sets depends substantially on the parameter estimation process. Several methods exist in the literature for estimating the GPD parameters. Mostly, the estimation is performed by maximum likelihood (ML). Alternatively, the probability weighted moments (PWM) and the method of moments (MOM) are often used, especially when the sample sizes are small. Although these three approaches are the most common and quite useful in many situations, their extensive use is also due to the lack of knowledge about other estimation methods. Many other methods exist in the extreme value and hydrological literature, and as such are not widely known to practitioners in other areas. This is the first of two papers that aim to fill this gap. We extensively review some of the methods used for estimating the GPD parameters, focusing on those that can be applied in practical situations in a simple and straightforward manner.
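Two of the methods named above, MOM and ML, are easy to sketch for the GPD. The moment estimators below follow the standard GPD mean and variance formulas (valid for shape ξ < 1/2), and the ML fit uses SciPy's `genpareto` with the location (threshold) fixed at zero. This is an illustrative sketch, not a reproduction of the paper's review:

```python
import numpy as np
from scipy.stats import genpareto

def gpd_mom(x):
    """Method-of-moments estimates for the GPD, assuming shape xi < 1/2:
    solves mean = sigma/(1 - xi) and var = sigma^2/((1 - xi)^2 (1 - 2 xi))."""
    m = np.mean(x)
    v = np.var(x, ddof=1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

rng = np.random.default_rng(2)
x = genpareto.rvs(c=0.1, scale=1.0, size=2000, random_state=rng)

print("MOM:", gpd_mom(x))
# ML via scipy, with the location (threshold) fixed at 0
xi_ml, loc, sigma_ml = genpareto.fit(x, floc=0)
print("ML :", xi_ml, sigma_ml)
```

For heavy tails (ξ ≥ 1/2) the moment equations break down, which is one reason the literature reviewed here offers so many alternatives to MOM.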

6.
Parameter estimation of the generalized Pareto distribution—Part II
This is the second part of a paper which focuses on reviewing methods for estimating the parameters of the generalized Pareto distribution (GPD). The GPD is a very important distribution in the extreme value context. It is commonly used for modeling observations that exceed very high thresholds. The ultimate success of the GPD in applications evidently depends on the parameter estimation process. Quite a few methods exist in the literature for estimating the GPD parameters. Estimation procedures such as maximum likelihood (ML), the method of moments (MOM) and the probability weighted moments (PWM) method were described in Part I of the paper. We continue to review methods for estimating the GPD parameters, in particular methods that are robust and procedures that use the Bayesian methodology. As in Part I, we focus on those that are relatively simple and straightforward to apply to real-world data.

7.
Censoring frequently occurs in survival analysis, but naturally observed lifetime samples are often not large. Thus, inferences based on the popular maximum likelihood (ML) estimation, which often gives biased estimates, should be corrected for bias. Here, we investigate the biases of ML estimates under the progressive type-II censoring scheme (pIIcs). We use a method proposed in Efron and Johnstone [Fisher's information in terms of the hazard rate. Technical Report No. 264, Stanford University, Stanford, California, 1987] to derive general expressions for bias-corrected ML estimates under the pIIcs. This requires derivation of the Fisher information matrix under the pIIcs. As an application, exact expressions are given for bias-corrected ML estimates of the Weibull distribution under the pIIcs. The performance of the bias-corrected ML estimates and the ML estimates is compared by simulations and a real data application.

8.
Parameter estimation in spatial regression models is complicated by the incorporation of spatial geographic information, and because maximum likelihood is the dominant approach, it is commonly assumed that least squares has no place in estimating such models. An analysis of the parameter estimation techniques for spatial regression models shows that least squares and maximum likelihood are in fact used to estimate different parameters of the model, and only by combining the two can the full set of parameters be estimated quickly and effectively. Mathematical derivation shows that the least squares estimator of the regression parameters in a spatial regression model is the best linear unbiased estimator. Significance tests of the regression parameters can be carried out under normality of the estimators, whereas the spatial-effect parameters cannot be tested in this way.

9.
This paper presents a method for using end-to-end available bandwidth measurements in order to estimate available bandwidth on individual internal links. The basic approach is to apply a power transform to the observed end-to-end measurements, model the result as a mixture of spatially correlated exponential random variables, carry out estimation by moment methods, then transform back to the original variables to get estimates and confidence intervals for the expected available bandwidth on each link. Because spatial dependence leads to certain parameter confounding, only upper bounds can be found reliably. Simulations with ns2 show that the method can work well and that the assumptions are approximately valid in the examples.

10.
The recursive least squares technique is often extended with exponential forgetting as a tool for parameter estimation in time-varying systems. The distribution of the resulting parameter estimates is, however, unknown when the forgetting factor is less than one. In this paper an approximate expression for the bias of the recursively obtained parameter estimates in a time-invariant AR(na) process with arbitrary noise is given, showing that the bias is non-zero and giving bounds on the approximation errors. Simulations confirm the approximate expressions.
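The algorithm under study is the standard recursive least squares recursion with a forgetting factor λ < 1. A minimal sketch (variable names are my own, not the paper's) on a time-invariant AR(1) process illustrates the setting in which the bias approximation is derived:

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting factor lam.
    phi: (n, p) regressor matrix; y: (n,) observations.
    Returns the final parameter estimate."""
    p_dim = phi.shape[1]
    theta = np.zeros(p_dim)
    P = delta * np.eye(p_dim)                    # large initial covariance
    for t in range(len(y)):
        ph = phi[t]
        k = P @ ph / (lam + ph @ P @ ph)         # gain vector
        theta = theta + k * (y[t] - ph @ theta)  # prediction-error update
        P = (P - np.outer(k, ph) @ P) / lam      # covariance update
    return theta

# Time-invariant AR(1): y_t = a * y_{t-1} + e_t with a = 0.8
rng = np.random.default_rng(3)
n, a = 2000, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = a * y[t - 1] + rng.normal(scale=0.1)
phi = y[:-1].reshape(-1, 1)
est = rls_forgetting(phi, y[1:], lam=0.99)
print(est)
```

With λ < 1 the estimator keeps tracking even though the true system is constant; the paper's point is that this discounting makes the estimate biased even here, with the bias depending on λ and the noise.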

11.
Generalized method of moments (GMM) estimation has become an important unifying framework for inference in econometrics in the last 20 years. It can be thought of as encompassing almost all of the common estimation methods, such as maximum likelihood, ordinary least squares, instrumental variables, and two-stage least squares, and nowadays is an important part of all advanced econometrics textbooks. The GMM approach links nicely to economic theory where orthogonality conditions that can serve as such moment functions often arise from optimizing behavior of agents. Much work has been done on these methods since the seminal article by Hansen, and much remains in progress. This article discusses some of the developments since Hansen's original work. In particular, it focuses on some of the recent work on empirical likelihood–type estimators, which circumvent the need for a first step in which the optimal weight matrix is estimated and have attractive information theoretic interpretations.
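The two-step GMM recipe referred to above (choose moment conditions, minimize a weighted quadratic form in their sample averages, then re-minimize with the estimated optimal weight matrix) can be shown on a toy over-identified problem. The sketch below, with hypothetical helper names and exponential moment conditions chosen purely for illustration, estimates an exponential mean from two moment conditions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def moments(mu, x):
    """Moment conditions for an exponential(mean mu) model:
    E[x] - mu = 0 and E[x^2] - 2 mu^2 = 0 (2 conditions, 1 parameter)."""
    return np.column_stack([x - mu, x**2 - 2.0 * mu**2])

def gmm_objective(mu, x, W):
    gbar = moments(mu, x).mean(axis=0)   # sample average of the moments
    return gbar @ W @ gbar

def two_step_gmm(x):
    # Step 1: identity weight matrix
    W = np.eye(2)
    mu1 = minimize_scalar(lambda m: gmm_objective(m, x, W),
                          bounds=(1e-6, 50.0), method='bounded').x
    # Step 2: optimal weight = inverse covariance of the moment functions
    S = np.cov(moments(mu1, x), rowvar=False)
    W = np.linalg.inv(S)
    return minimize_scalar(lambda m: gmm_objective(m, x, W),
                           bounds=(1e-6, 50.0), method='bounded').x

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=1000)
print(two_step_gmm(x))
```

The empirical-likelihood-type estimators discussed in the article avoid exactly this first step, in which the weight matrix W must be estimated before the final minimization.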

12.
Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120–S127, 2002) uses a reformulation of the Cox likelihood to a Poisson type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods.

13.
In this paper, a new life test plan called the progressive first-failure-censoring scheme introduced by Wu and Kuş [On estimation based on progressive first-failure-censored sampling, Comput. Statist. Data Anal. 53(10) (2009), pp. 3659–3670] is considered. Based on this type of censoring, the maximum likelihood (ML) and Bayes estimates for some survival time parameters, namely the reliability and hazard functions, as well as the parameters of the Burr-XII distribution, are obtained. The Bayes estimators relative to both symmetric and asymmetric loss functions are discussed. We use a conjugate prior for one shape parameter and a discrete prior for the other. Exact and approximate confidence intervals, together with the exact confidence region for the two shape parameters, are derived. A numerical example using a real data set is provided to illustrate the estimation methods developed here. The ML and the different Bayes estimates are compared via a Monte Carlo simulation study.

14.
15.
The maximum likelihood (ML) method is used to estimate the unknown Gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used. For this estimator, estimation of the Liu parameter d is an important problem, and a few estimation methods are available in the literature for this parameter. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study has been conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the Monte Carlo simulation and application results, the Liu estimator is shown to be always superior to ML, and a recommendation is given as to which Liu parameter estimator should be used in the Liu estimator for the GR model.
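The Liu estimator itself has a simple closed form. The sketch below shows the linear-model version, beta_d = (X'X + I)^(-1) (X'y + d * beta_OLS), on a collinear design; the paper adapts this idea to Gamma regression via the ML machinery, so treat this as background illustration only, with hypothetical function names:

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu estimator for the linear model:
    beta_d = (X'X + I)^(-1) (X'y + d * beta_OLS), 0 <= d <= 1."""
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + np.eye(X.shape[1]), X.T @ y + d * beta_ols)

# Collinear design: second column nearly equal to the first
rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n)])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_liu = liu_estimator(X, y, d=0.5)
print(beta_ols, beta_liu)
```

In the eigenbasis of X'X each component of the OLS estimate is scaled by (λ + d)/(λ + 1), so the ill-determined small-eigenvalue direction is shrunk strongly while well-determined directions are barely touched; choosing d well is exactly the problem the paper studies for the GR model.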

16.
Heteroscedasticity-consistent covariance matrix (HCCM) estimators are commonly used for testing regression coefficients when the error terms of a regression model are heteroscedastic. These estimators are based on the residuals obtained from the method of ordinary least squares, and this method yields inefficient estimators in the presence of heteroscedasticity. It is usual practice to use the estimated weighted least squares method or some adaptive method to find efficient estimates of the regression parameters when the form of heteroscedasticity is unknown, but HCCM estimators are seldom derived from such efficient estimators for testing purposes in the available literature. The current article addresses this concern and presents weighted versions of HCCM estimators. Our numerical work uncovers the performance of these estimators and their finite-sample properties in terms of interval estimation and null rejection rate.

17.
Clustered binary data are common in medical research and can be fitted with the logistic regression model with random effects, which belongs to a wider class of models called generalized linear mixed models. Likelihood-based estimation of the model parameters often has to handle intractable integration, which has led to several estimation methods to overcome this difficulty. The penalized quasi-likelihood (PQL) method is very popular and computationally efficient in most cases. The expectation–maximization (EM) algorithm yields maximum-likelihood estimates, but requires computing a possibly intractable integral in the E-step. Variants of the EM algorithm for evaluating the E-step are introduced. The Monte Carlo EM (MCEM) method computes the E-step by approximating the expectation using Monte Carlo samples, while the Modified EM (MEM) method computes the E-step by approximating the expectation using Laplace's method. All these methods involve several steps of approximation, so that the corresponding estimates of the model parameters contain inevitable errors (large or small) induced by the approximation. Understanding and quantifying this discrepancy theoretically is difficult due to the complexity of the approximations in each method, even when the focus is on clustered binary data. As an alternative competing computational method, we also consider a non-parametric maximum-likelihood (NPML) method. We review and compare the PQL, MCEM, MEM and NPML methods for clustered binary data via a simulation study, which will be useful for researchers when choosing an estimation method for their analysis.

18.
A compound class of zero-truncated Poisson and lifetime distributions is introduced. A specialization leads to a new three-parameter distribution, called the doubly Poisson-exponential distribution, which may represent the lifetime of units connected in a series-parallel system. The new distribution can be obtained by compounding two zero-truncated Poisson distributions with an exponential distribution. Among its motivations is that its hazard rate function can take different shapes, such as decreasing, increasing and upside-down bathtub, depending on the values of its parameters. Several properties of the new distribution are discussed. Based on progressive type-II censoring, six estimation methods [maximum likelihood, moments, least squares, weighted least squares, and Bayes estimation (under linear-exponential and general entropy loss functions)] are used to estimate the involved parameters. The performance of these methods is investigated through a simulation study. The Bayes estimates are obtained using a Markov chain Monte Carlo algorithm. In addition, confidence intervals, symmetric credible intervals and highest posterior density credible intervals of the parameters are obtained. Finally, an application to a real data set is used to compare the new distribution with five other distributions.

19.
Well-known estimation methods such as conditional least squares, quasi-likelihood and maximum likelihood (ML) can be unified via a single framework of martingale estimating functions (MEFs). Asymptotic distributions of estimates for ergodic processes use a constant norm (e.g. the square root of the sample size) for asymptotic normality. For certain non-ergodic-type applications, however, such as explosive autoregression and super-critical branching processes, one needs a random norm in order to obtain normal limit distributions. In this paper, we are concerned with non-ergodic processes and investigate limit distributions for a broad class of MEFs. Asymptotic optimality (within a certain class of non-ergodic MEFs) of the ML estimate is deduced by establishing a convolution theorem using a random norm. Applications to non-ergodic autoregressive processes, generalized autoregressive conditional heteroscedastic-type processes, and super-critical branching processes are discussed. Asymptotic optimality in terms of the maximum random limiting power of large sample tests is briefly discussed.

20.
In estimating a linear measurement error model, extra information is generally needed to identify the model. Here the authors show that the polynomial structural model with errors in the endogenous and exogenous variables can be identified without any extra information if the degree is greater than one. They also show that a weighted least squares approach for the estimation of the parameters in the model leads to the same estimates as the solutions of a system of estimating equations.
