Similar Documents (20 results)
1.
The EM algorithm and its extensions are popular tools for maximum likelihood estimation in incomplete-data settings. However, one limitation of these methods is their slow convergence. The PX-EM (parameter-expanded EM) algorithm was proposed by Liu, Rubin and Wu to make EM converge much faster. Stochastic versions of EM, on the other hand, are powerful alternatives to EM when the E-step is intractable in closed form. In this paper we propose PX-SAEM, a parameter-expanded version of the SAEM algorithm (Stochastic Approximation version of EM). PX-SAEM is shown to accelerate SAEM and improve convergence toward the maximum likelihood estimate in a parametric framework. Numerical examples illustrate the behavior of PX-SAEM in linear and nonlinear mixed effects models.
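Since PX-SAEM builds on SAEM, a minimal sketch of a plain SAEM iteration may help fix ideas. The toy model below (a latent Gaussian mean; the model and all names are illustrative choices, not taken from the paper) actually has a closed-form E-step, so SAEM is unnecessary in practice; it is used here only because the MLE is known to be the sample mean of the observations, making the result easy to check:

```python
import random
import statistics

def saem_normal(y, iters=2000, seed=0):
    """Toy SAEM for y_i = x_i + e_i with latent x_i ~ N(mu, 1), e_i ~ N(0, 1).
    Marginally y_i ~ N(mu, 2), so the MLE of mu is mean(y)."""
    rng = random.Random(seed)
    mu, s = 0.0, 0.0
    for k in range(1, iters + 1):
        # Simulation step: draw x_i from p(x_i | y_i, mu) = N((y_i + mu)/2, 1/2)
        x = [rng.gauss((yi + mu) / 2.0, 0.5 ** 0.5) for yi in y]
        # Stochastic-approximation step: update the sufficient statistic
        gamma = 1.0 / k
        s = s + gamma * (statistics.fmean(x) - s)
        # M-step: maximize the complete-data likelihood given s
        mu = s
    return mu

random.seed(1)
y = [random.gauss(3.0, 2.0 ** 0.5) for _ in range(500)]
mu_hat = saem_normal(y)
```

The decreasing step sizes `gamma = 1/k` are what distinguish SAEM from a plain stochastic EM: the simulation noise is averaged out over iterations. PX-SAEM would additionally expand the parameter space to speed up this convergence.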

2.
We propose a new methodology for maximum likelihood estimation in mixtures of nonlinear mixed effects models (NLMEM). Such mixtures include mixtures of distributions, mixtures of structural models and mixtures of residual error models. Since the individual parameters inside the NLMEM are not observed, we propose to combine the EM algorithm, usually used for mixture models when the mixture structure concerns an observed variable, with the Stochastic Approximation EM (SAEM) algorithm, which is known to be suitable for maximum likelihood estimation in NLMEM and also has nice theoretical properties. The main advantage of this hybrid procedure is to avoid the simulation step for the unknown group labels required by a “full” version of SAEM. The resulting MSAEM (Mixture SAEM) algorithm is implemented in the Monolix software. Several criteria for classifying subjects and estimating individual parameters are also proposed. Numerical experiments on simulated data show that MSAEM performs well in a general framework of mixtures of NLMEM: it provides an estimator close to the maximum likelihood estimator in very few iterations and is robust with respect to initialization. An application to pharmacokinetic (PK) data demonstrates the potential of the method for practical applications.

3.
Applying the minimum distance (MD) estimation method to the linear regression model for estimating regression parameters is a difficult and time-consuming process because of the complexity of its distance function, and hence it is computationally expensive. To deal with this computational cost, this paper proposes a fast algorithm that exploits a coordinate-wise minimization technique to obtain the MD estimator. An R package (KoulMde) based on the proposed algorithm and written in Rcpp is available online.
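The coordinate-wise scheme itself is easy to illustrate. The sketch below applies it to a plain least-squares objective, which is only a stand-in: the actual MD distance function minimized by KoulMde is considerably more complex, and the exact one-dimensional updates there differ. This shows only the cyclic one-coordinate-at-a-time structure:

```python
def coordinate_descent(X, y, iters=200):
    """Cyclic coordinate-wise minimization, illustrated on least squares.
    Each pass exactly minimizes the objective in one coordinate at a time."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residuals with coordinate j removed from the fit
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            num = sum(X[i][j] * r[i] for i in range(n))
            den = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = num / den  # exact 1-D minimizer for squared error
    return beta

# data generated exactly from y = 1 + 2x, so the minimizer is known
X = [[1.0, xi] for xi in [0.0, 1.0, 2.0, 3.0]]
y = [1.0, 3.0, 5.0, 7.0]
beta_hat = coordinate_descent(X, y)
```

Each coordinate update is cheap, which is the source of the speed-up the paper reports over minimizing the full distance function directly.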

4.
A simulation experiment compares the accuracy and precision of three alternate estimation techniques for the parameters of the STARMA model. Maximum likelihood estimation, in most ways the "best" estimation procedure, involves a large amount of computational effort so that two approximate techniques, exact least squares and conditional maximum likelihood, are often proposed for series of moderate lengths. This simulation experiment compares the accuracy of these three estimation procedures for simulated series of various lengths, and discusses the appropriateness of the three procedures as a function of the length of the observed series.

5.
Life tables used in life insurance determine the age of death distribution only at integer ages. Therefore, actuaries make fractional age assumptions to interpolate between integer age values when they have to value payments that are not restricted to integer ages. Traditional fractional age assumptions as well as the fractional independence assumption are easy to apply but result in a non-intuitive overall shape of the force of mortality. Other approaches proposed either require expensive optimization procedures or produce many discontinuities. We suggest a new, computationally inexpensive algorithm to select the parameters within the LFM-family introduced by Jones and Mereu (Insur Math Econ 27:261–276, 2000). In contrast to previously suggested methods, our algorithm enforces a monotone force of mortality between integer ages if the mortality rates are monotone and keeps the number of discontinuities small.
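For contrast with the LFM-family approach, here is the traditional uniform-distribution-of-deaths (UDD) fractional age assumption, whose closed-form force of mortality is trivial to compute but exhibits exactly the non-intuitive shape the abstract criticizes (rising within each year, with jumps at integer ages). The mortality rate used is a made-up illustrative value, not from any published table:

```python
def force_of_mortality_udd(q, x, t):
    """Force of mortality at age x+t (0 <= t < 1) under the UDD
    fractional-age assumption: mu_{x+t} = q_x / (1 - t * q_x),
    where q[x] is the one-year death probability at integer age x."""
    qx = q[x]
    return qx / (1.0 - t * qx)

q = {60: 0.02}  # illustrative one-year death probability
mu_start = force_of_mortality_udd(q, 60, 0.0)
mu_mid = force_of_mortality_udd(q, 60, 0.5)
mu_end = force_of_mortality_udd(q, 60, 0.999)
```

Under UDD the force of mortality is strictly increasing within each year of age and generally discontinuous at integer ages; the algorithm in the paper instead selects LFM-family parameters so that monotone mortality rates yield a monotone force of mortality with few discontinuities.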

6.
In this paper several alternative robust regression techniques are compared for estimating the parameters of a Weibull distribution. In addition to the usual least squares (L2) and least absolute deviation (L1) methods, a number of one-step reweighting schemes based on the L1 residuals are considered. The results of an extensive series of Monte Carlo simulation experiments demonstrate that the Anscombe reweighting scheme generally produces the best Weibull estimates over the range of sample sizes and parameter values studied.
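The one-step idea, fit by L1, downweight observations with large residuals, then refit once by weighted least squares, can be sketched on a no-intercept line. The weight function below is a generic Huber-type choice for illustration; the Anscombe scheme studied in the paper uses a different weight function, and the data are made up:

```python
def wls_slope(x, y, w):
    """Weighted least-squares slope for the no-intercept model y = b*x."""
    return (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) /
            sum(wi * xi * xi for wi, xi in zip(w, x)))

def l1_slope(x, y):
    """Exact L1 slope for y = b*x: the x-weighted median of y_i/x_i."""
    pairs = sorted((yi / xi, xi) for xi, yi in zip(x, y))
    total, cum = sum(x), 0.0
    for ratio, wt in pairs:
        cum += wt
        if cum >= total / 2.0:
            return ratio
    return pairs[-1][0]

def one_step_reweighted(x, y, c=1.345):
    """One-step reweighting: L1 start, downweight large residuals, refit once.
    (Huber-type weights for illustration; Anscombe's scheme differs.)"""
    b0 = l1_slope(x, y)
    r = [yi - b0 * xi for xi, yi in zip(x, y)]
    mad = sorted(abs(ri) for ri in r)[len(r) // 2]
    s = mad / 0.6745 if mad > 0 else 1.0     # robust scale estimate
    w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
    return wls_slope(x, y, w)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 30.0]   # last point is an outlier; true slope is 2
b_robust = one_step_reweighted(x, y)
b_ols = wls_slope(x, y, [1.0] * 5)
```

Starting from the L1 fit matters: an OLS start would leave the outlier's residual mixed into the scale estimate, blunting the downweighting.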

7.
In this paper, we consider the four-parameter bivariate generalized exponential distribution proposed by Kundu and Gupta [Bivariate generalized exponential distribution, J. Multivariate Anal. 100 (2009), pp. 581–593] and propose an expectation–maximization algorithm to find the maximum-likelihood estimators of the four parameters under random left censoring. A numerical experiment is carried out to discuss the properties of the estimators obtained iteratively.

8.
This paper discusses exact joint confidence region for the shape parameter β and scale parameter η of the two-parameter Weibull distribution. As an application, the joint confidence region is used to obtain a conservative lower confidence bound for the reliability function. The method can be used for both complete and censored samples.

9.
We consider the problem of estimating a vector parameter of interest in the presence of nuisance parameters through vector unbiased statistical estimation functions (USEFs). An extension of the Cramér-Rao inequality relevant to this problem is obtained. Three possible optimality criteria in the class of regular vector USEFs are those based on (i) the non-negative definiteness of the difference of dispersion matrices, (ii) the trace of the dispersion matrix and (iii) the determinant of the dispersion matrix. We refer to these three criteria as M-optimality, T-optimality and D-optimality, respectively. The equivalence of these three optimality criteria is established. By restricting the class of regular USEFs considered by Ferreira (1982), we study some interesting properties of the standardized USEFs and establish the essential uniqueness of the standardized M-optimal USEF in this restricted class. Finally, some illustrative examples are included.

10.
In this paper, we propose a consistent method of estimation for the parameters of the three-parameter inverse Gaussian distribution. We then discuss some properties of these estimators and show by means of a Monte Carlo simulation study that the proposed estimators perform better than some other prominent estimators in terms of bias and root mean squared error. Finally, we present two real-life examples to illustrate the method of inference developed here.

11.
Many situations, especially in Bayesian statistical inference, call for the use of a Markov chain Monte Carlo (MCMC) method as a way to draw approximate samples from an intractable probability distribution. With the use of any MCMC algorithm comes the question of how long the algorithm must run before it can be used to draw an approximate sample from the target distribution. A common method of answering this question involves verifying that the Markov chain satisfies a drift condition and an associated minorization condition (Rosenthal, J Am Stat Assoc 90:558–566, 1995; Jones and Hobert, Stat Sci 16:312–334, 2001). This is often difficult to do analytically, so as an alternative, it is typical to rely on output-based methods of assessing convergence. The work presented here gives a computational method of approximately verifying a drift condition and a minorization condition specifically for the symmetric random-scan Metropolis algorithm. Two examples of the use of the method described in this article are provided, and output-based methods of convergence assessment are presented in each example for comparison with the upper bound on the convergence rate obtained via the simulation-based approach.
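The sampler the paper targets, the symmetric random-scan Metropolis algorithm, is itself simple: at each step one coordinate is chosen uniformly at random and updated with a symmetric proposal. A minimal sketch follows (the drift/minorization verification, which is the paper's actual contribution, is not shown; the Gaussian target and all names are illustrative choices):

```python
import math
import random

def random_scan_metropolis(logpi, x0, n, step=1.0, seed=0):
    """Symmetric random-scan Metropolis: pick one coordinate uniformly at
    random, propose a symmetric Gaussian move on it, accept/reject."""
    rng = random.Random(seed)
    x = list(x0)
    out = []
    for _ in range(n):
        j = rng.randrange(len(x))
        prop = list(x)
        prop[j] += rng.gauss(0.0, step)          # symmetric proposal
        if math.log(rng.random()) < logpi(prop) - logpi(x):
            x = prop                              # accept
        out.append(list(x))
    return out

# illustrative target: standard bivariate normal (independent coordinates)
logpi = lambda v: -0.5 * (v[0] ** 2 + v[1] ** 2)
chain = random_scan_metropolis(logpi, [5.0, -5.0], 20000)
burned = chain[2000:]                             # discard burn-in
mean0 = sum(v[0] for v in burned) / len(burned)
```

The burn-in length of 2000 used here is exactly the kind of ad hoc choice that drift and minorization conditions replace with a provable upper bound on the convergence rate.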

12.
In this paper, we present an algorithm for clustering based on univariate kernel density estimation, named ClusterKDE. It consists of an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although in our applications we have used the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is very simple, easy to implement, well defined and stops in a finite number of steps; that is, it always converges regardless of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that the ClusterKDE algorithm is competitive and fast compared with the well-known clusterdata and k-means algorithms used by Matlab to cluster data.
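The general idea of clustering 1-D data through a Gaussian KDE can be sketched with a mean-shift-style mode search: climb each point to its KDE mode and group points sharing a mode. This is an illustration of the KDE-clustering idea only; the ClusterKDE paper instead extracts clusters by iteratively minimizing a smooth kernel criterion, and the data, bandwidth and tolerance below are made-up values:

```python
import math

def kde_mode(data, start, h, iters=200):
    """Hill-climb to a mode of the Gaussian KDE via mean-shift updates."""
    x = start
    for _ in range(iters):
        w = [math.exp(-0.5 * ((x - d) / h) ** 2) for d in data]
        x = sum(wi * di for wi, di in zip(w, data)) / sum(w)
    return x

def cluster_kde(data, h=0.5, tol=0.5):
    """Cluster 1-D points by the KDE mode each point climbs to.
    The number of clusters emerges from the data, not a priori."""
    labels, modes = [], []
    for d in data:
        m = kde_mode(data, d, h)
        for i, mi in enumerate(modes):
            if abs(m - mi) < tol:      # existing mode: reuse its label
                labels.append(i)
                break
        else:                           # new mode found: new cluster
            modes.append(m)
            labels.append(len(modes) - 1)
    return labels, modes

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # two well-separated groups
labels, modes = cluster_kde(data)
```

As in ClusterKDE, nothing here asks for the number of clusters up front; it falls out of how many distinct KDE modes the points reach.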

13.
We propose a new stochastic approximation (SA) algorithm for maximum-likelihood estimation (MLE) in the incomplete-data setting. This algorithm is most useful for problems in which the EM algorithm is infeasible owing to an intractable E-step or M-step. Compared to other algorithms that have been proposed for intractable EM problems, such as the MCEM algorithm of Wei and Tanner (1990), the proposed algorithm appears more generally applicable and efficient. The approach we adopt is inspired by the Robbins-Monro (1951) stochastic approximation procedure, and we show that the proposed algorithm can be used to solve some long-standing problems in computing an MLE with incomplete data. We prove that in general O(n) simulation steps are required to compute the MLE with the SA algorithm, whereas O(n log n) simulation steps are required with the MCEM and/or the MCNR algorithm, where n is the sample size of the observations. Examples include computing the MLE in the nonlinear errors-in-variables model and the nonlinear regression model with random effects.
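The Robbins-Monro (1951) procedure that inspires the paper's algorithm finds the root of a function observable only through noisy evaluations, by taking decreasing-step corrections. A minimal sketch (the linear test function and all names are illustrative, not the paper's likelihood setting):

```python
import random

def robbins_monro(noisy_h, theta0, n, seed=0):
    """Robbins-Monro iteration: theta_{k+1} = theta_k - (1/k) * H(theta_k),
    where H is a noisy evaluation of M(theta) = E[H(theta, noise)].
    Converges to the root of M under standard conditions."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, n + 1):
        theta -= (1.0 / k) * noisy_h(theta, rng)
    return theta

# M(theta) = theta - 2, observed only through noisy evaluations
noisy_h = lambda theta, rng: (theta - 2.0) + rng.gauss(0.0, 1.0)
root = robbins_monro(noisy_h, 10.0, 50000)
```

In the MLE setting of the paper, the role of `noisy_h` is played by a simulation-based estimate of the score function, so each iteration needs only a few simulated draws rather than the large Monte Carlo E-step that MCEM requires.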

14.
The purpose of this paper is to provide a method for constructing exact joint confidence regions for the parameters of type I (maximum) and type I (minimum) extreme value distributions. Joint confidence regions for the parameters of Weibull distributions are also discussed. The calculation for these joint confidence regions requires a small computer program.

15.
Two approaches to estimating rate parameters in stochastic compartmental models are compared. It is shown that the methods are essentially equivalent both computationally and in terms of asymptotic efficiency. A direct comparison is made using data from a cancer treatment follow-up study. Keywords: Compartmental model; Markov process; Conditional mean and variance.

16.
Universal kriging is a form of interpolation that takes into account the local trends in data when minimizing the error associated with the estimator. Under multivariate normality assumptions, the given predictor is the best linear unbiased predictor, but if the underlying distribution is not normal, the estimator will not be unbiased and will be vulnerable to outliers. With spatial data, it is not only the presence of outliers that may spoil the predictions, but also the boundary sites, usually corners, which tend to have high leverage. As an alternative, a weighted one-step generalized M estimator of the location parameters in a spatial linear model is proposed. It is especially recommended in the case of irregularly spaced data.

17.
To improve the out-of-sample performance of the portfolio, Lasso regularization is incorporated into the Mean Absolute Deviation (MAD)-based portfolio selection method. It is shown that this portfolio selection problem can be reformulated as a constrained Least Absolute Deviation problem with linear equality constraints. Moreover, we propose a new descent algorithm based on the ideas of ‘nonsmooth optimality conditions’ and a ‘basis descent direction set’. The resulting MAD-Lasso method enjoys at least two advantages. First, it does not involve estimating the covariance matrix, which is difficult particularly in high-dimensional settings. Second, sparsity is encouraged: assets with weights close to zero in Markowitz's portfolio are driven to zero automatically, which reduces the management cost of the portfolio. Extensive simulation and real-data examples indicate that when the Lasso regularization is incorporated, the MAD portfolio selection method is consistently improved in terms of out-of-sample performance, as measured by the Sharpe ratio and sparsity. Moreover, simulation results suggest that the proposed descent algorithm is more time-efficient than the interior point method and the ADMM algorithm.
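The MAD-Lasso objective itself is easy to state: the mean absolute deviation of the portfolio return plus an L1 penalty on the weights. The sketch below evaluates that objective only (the paper's contribution is the descent algorithm that minimizes it subject to the equality constraints, which is not shown); the return data and penalty level are made-up values:

```python
def mad_lasso_objective(R, w, lam):
    """MAD-Lasso objective: mean absolute deviation of portfolio returns
    plus an L1 penalty on the weights.  R is a list of per-period return
    vectors; w is the weight vector; lam is the Lasso penalty level."""
    port = [sum(wi * ri for wi, ri in zip(w, row)) for row in R]
    mu = sum(port) / len(port)
    mad = sum(abs(p - mu) for p in port) / len(port)
    return mad + lam * sum(abs(wi) for wi in w)

# illustrative returns for two assets over three periods
R = [[0.01, 0.03], [0.02, -0.01], [0.00, 0.02]]
obj = mad_lasso_objective(R, [0.5, 0.5], lam=0.1)
```

Note that the objective uses only the observed returns directly; no covariance matrix is estimated, which is the first advantage the abstract highlights.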

18.
19.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms.  
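The pixel-by-pixel ICM baseline that cascade improves upon can be sketched for a binary image with an Ising-style prior: each pixel in turn is set to the label that maximizes its conditional posterior given its neighbors, and the image is swept repeatedly. The data term and the coupling strength `beta` below are illustrative choices, not taken from the paper:

```python
def icm_denoise(obs, beta=1.5, iters=5):
    """Iterated conditional modes for a binary image with an Ising prior.
    Pixel-by-pixel ICM; the cascade algorithm would first run this on
    coarser images formed by blocking squares of pixels."""
    h, w = len(obs), len(obs[0])
    x = [row[:] for row in obs]          # initialize at the observed data
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_score = x[i][j], float("-inf")
                for lab in (0, 1):
                    # data term: agreement with the observed pixel
                    score = 1.0 if lab == obs[i][j] else -1.0
                    # prior term: agreement with the 4 neighbors
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            score += beta if x[ni][nj] == lab else -beta
                    if score > best_score:
                        best, best_score = lab, score
                x[i][j] = best
    return x

noisy = [[1, 1, 1],
         [1, 0, 1],   # isolated flipped pixel
         [1, 1, 1]]
restored = icm_denoise(noisy)
```

Because each update is purely local, heavily degraded regions can trap this sweep in poor local optima, which is exactly the weakness that motivates applying ICM first on blocked, coarser images.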

20.
James-Stein estimators are proposed for the parameter of an inverse Gaussian (IG) distribution. The estimators in this class have smaller expected quadratic loss than the maximum likelihood estimator usually employed when analysing real data sets. This problem is also studied for the case of an unknown nuisance parameter. Finally, improved estimators are considered for this parameter in the two-sample problem.
