Similar Literature (20 results)
1.
The Expectation–Maximization (EM) algorithm is a very popular technique for maximum likelihood estimation in incomplete data models. When the expectation step cannot be performed in closed form, a stochastic approximation of EM (SAEM) can be used. Under very general conditions, the authors have shown that the attractive stationary points of the SAEM algorithm correspond to the global and local maxima of the observed likelihood. In order to avoid convergence towards a local maximum, a simulated annealing version of SAEM is proposed. An illustrative application to a convolution model, estimating the coefficients of the filter, is given.
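To make the tempering idea concrete, here is a minimal sketch for a toy incomplete-data model where the E-step is in fact available in closed form; the latent Gaussian structure, the step sizes, and the cooling schedule are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy incomplete-data model: z_i ~ N(theta, 1) latent, y_i = z_i + e_i, e_i ~ N(0, s2).
s2, theta_true, n = 1.0, 2.0, 200
z = rng.normal(theta_true, 1.0, n)
y = z + rng.normal(0.0, np.sqrt(s2), n)

def simulate_latent(y, theta, temp):
    # Posterior p(z | y, theta) is Gaussian; annealing inflates the simulation
    # variance by a factor temp >= 1 to help escape local maxima early on.
    v = 1.0 / (1.0 + 1.0 / s2)
    m = v * (theta + y / s2)
    return rng.normal(m, np.sqrt(temp * v))

theta, S = 0.0, 0.0
for k in range(1, 501):
    gamma = 1.0 / k               # stochastic-approximation step size
    temp = 1.0 + 5.0 * 0.99**k    # assumed geometric cooling towards temp = 1
    zk = simulate_latent(y, theta, temp)
    S = S + gamma * (zk.mean() - S)   # SA update of the sufficient statistic
    theta = S                         # M-step for the complete-data likelihood
print(theta, y.mean())                # should agree closely with the MLE
```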

2.
Estimation of finite mixture models when the mixing distribution support is unknown is an important problem. This article gives a new approach based on a marginal likelihood for the unknown support. Motivated by a Bayesian Dirichlet prior model, a computationally efficient stochastic approximation version of the marginal likelihood is proposed and large-sample theory is presented. By restricting the support to a finite grid, a simulated annealing method is employed to maximize the marginal likelihood and estimate the support. Real and simulated data examples show that this novel stochastic approximation and simulated annealing procedure compares favorably with existing methods.

3.
Uncovering the relationship between theoretical and actual futures prices helps improve the pricing efficiency of the futures market and strengthens the price-discovery function of futures prices. Futures mispricing is computed from the cost-of-carry pricing model, and the distribution of the mispricing is fitted with a continuous mixture-of-normals model: the unknown parameters are first estimated by maximum likelihood via Newton iteration, and the Newton results are then further refined with a simulated annealing algorithm. The results show that simulated annealing effectively improves estimation accuracy, and that the continuous mixture-of-normals model provides a better fit to the distribution of futures mispricing.
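To make the two-stage estimation concrete, here is a hedged sketch: the cost-of-carry benchmark F = S·exp((r + c)·τ) is standard, but the simulated quotes, the two-component mixture parameterization, and the BFGS stage (a quasi-Newton stand-in for the Newton iteration) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Cost-of-carry theoretical price; mispricing = market price - benchmark.
S, r, c, tau = 100.0, 0.03, 0.01, 0.25
F_theory = S * np.exp((r + c) * tau)
F_market = F_theory + rng.normal(0, 0.5, 1000)   # placeholder market quotes
d = F_market - F_theory                          # pricing deviations

def negloglik(p):
    # Two-component normal mixture; p = (w_logit, mu1, mu2, log s1, log s2).
    w = 1.0 / (1.0 + np.exp(-p[0]))
    f = (w * norm.pdf(d, p[1], np.exp(p[3]))
         + (1 - w) * norm.pdf(d, p[2], np.exp(p[4])))
    return -np.sum(np.log(f + 1e-300))

# Stage 1: quasi-Newton MLE (stands in for the paper's Newton iteration).
fit = minimize(negloglik, np.array([0.0, -0.1, 0.1, 0.0, 0.0]), method="BFGS")

# Stage 2: simulated annealing refinement around the Newton solution.
cur, cur_val, T = fit.x.copy(), fit.fun, 1.0
best, best_val = cur.copy(), cur_val
for _ in range(2000):
    prop = cur + rng.normal(0, 0.05, 5) * T
    val = negloglik(prop)
    if val < cur_val or rng.random() < np.exp((cur_val - val) / T):
        cur, cur_val = prop, val
        if val < best_val:
            best, best_val = prop.copy(), val
    T *= 0.998
print(best_val <= fit.fun)   # the SA stage can only improve on the Newton fit
```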

4.
The generalized extreme-value distribution has been the distribution of choice for modeling maxima (or minima) data, since theory shows it to be the limiting form of the distribution of extremes. However, fits to finite samples are not always adequate. Hosking (1994) and Parida (1999) suggest the four-parameter Kappa distribution as an alternative, and Hosking (1994) developed an L-moment procedure for its estimation. Some compromises must be made in practice, however, as seen in Parida (1999): L-moment estimators of the four-parameter Kappa distribution are not always computable or feasible. A simulation study in this paper quantifies the extent of each problem. Maximum likelihood is investigated as an alternative method of estimation, and a simulation study compares the performance of both methods. Finally, further benefits of maximum likelihood are shown when wind speeds from the tropical Pacific are examined and the weekly maxima for 10 buoys in the area are analyzed.

5.
A strategy is proposed for initializing the EM algorithm in the multivariate Gaussian mixture context. It consists of randomly drawing, at low computational cost in many situations, initial mixture parameters from an appropriate space that includes all possible EM trajectories. This space is defined simply by two relations, satisfied by any EM iteration, between the first two empirical moments and the mixture parameters. An experimental study on simulated and real data sets clearly shows that this strategy outperforms classical methods, since it has the desirable property of widely exploring the local maxima of the likelihood function.
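A univariate sketch of this kind of moment-constrained random initialization, under assumptions: the recentring trick and the common-variance choice below are one simple way to satisfy the two moment relations, not necessarily the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 500)           # placeholder data
m1, m2 = x.mean(), (x**2).mean()    # first two empirical moments
K = 3

def draw_init(K, max_tries=1000):
    # Draw mixture parameters satisfying the two relations every EM iterate
    # obeys: sum_k p_k mu_k = m1 and sum_k p_k (s2_k + mu_k^2) = m2.
    for _ in range(max_tries):
        p = rng.dirichlet(np.ones(K))
        mu = rng.normal(m1, np.sqrt(max(m2 - m1**2, 1e-12)), K)
        mu = mu - (p @ mu - m1)      # recentre so the mixture mean equals m1
        s2 = m2 - p @ mu**2          # common variance matching the 2nd moment
        if s2 > 0:
            return p, mu, np.full(K, s2)
    raise RuntimeError("no feasible draw found")

p0, mu0, s20 = draw_init(K)
print(p0 @ mu0, p0 @ (s20 + mu0**2))   # reproduces (m1, m2) up to rounding
```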

6.
Summary. The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise, by using likelihood ratio tests for example, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criterion value(s). However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension to the traditional simulated annealing algorithm that allows for moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore be used to locate models and parameters that minimize criteria such as the Akaike information criterion, but within a single algorithm, removing the need for large numbers of simulations to be run. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
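As an illustration of the idea, the following sketch runs a transdimensional simulated annealing over logistic regression models, mixing coefficient perturbations with variable add/drop moves while cooling on an AIC energy; the move mix, proposal scales, and cooling schedule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 8
X = rng.normal(size=(n, p))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))).astype(float)

def aic_energy(gamma, beta):
    # Energy = AIC at the current (model, parameter) state: -2 log-likelihood
    # of the logistic model using only the variables with gamma_j = True.
    eta = X[:, gamma] @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    return -2 * loglik + 2 * gamma.sum()

gamma = np.ones(p, dtype=bool)
beta = np.zeros(gamma.sum())
energy, T = aic_energy(gamma, beta), 5.0
for _ in range(20000):
    g, b = gamma.copy(), beta.copy()
    if rng.random() < 0.5:          # within-model move: perturb one coefficient
        if b.size:
            b[rng.integers(b.size)] += rng.normal(0, 0.2)
    else:                           # between-model move: toggle one variable
        j = rng.integers(p)
        g[j] = ~g[j]
        full = np.zeros(p)          # rebuild coefficients for the new model,
        full[gamma] = beta          # with any newly added variable set to 0
        b = full[g]
    e = aic_energy(g, b)
    if e < energy or rng.random() < np.exp((energy - e) / T):
        gamma, beta, energy = g, b, e
    T = max(T * 0.9997, 0.01)
print(np.flatnonzero(gamma), energy)   # selected variables and final AIC
```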

7.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the Expectation-Maximization algorithm and its Monte Carlo variants. Our approach is different and motivated by considerations similar to those behind simulated annealing: we build a sequence of artificial distributions whose support concentrates on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance on several applications of the proposed approach.
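A minimal sketch of the strategy for a toy location model: artificial targets proportional to L(θ)^γ with γ increasing past 1, advanced by reweighting, resampling, and a Metropolis move. The temperature ladder and proposal scale are assumptions, not the paper's tuning.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
x = rng.normal(1.5, 1.0, 100)          # data; unknown location theta

def loglik(theta):
    # Log-likelihood evaluated at an array of particle positions.
    return norm.logpdf(x[:, None], theta, 1.0).sum(axis=0)

N = 1000
theta = rng.uniform(-10, 10, N)        # particles from a flat "prior"
logw = np.zeros(N)
gammas = np.linspace(0.0, 30.0, 61)    # powers > 1 concentrate on the MLE set
for g0, g1 in zip(gammas[:-1], gammas[1:]):
    logw += (g1 - g0) * loglik(theta)  # incremental importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, N, p=w)        # multinomial resampling
    theta, logw = theta[idx], np.zeros(N)
    # One random-walk Metropolis move per particle, target ∝ L(theta)^g1.
    prop = theta + rng.normal(0, 0.5 / np.sqrt(g1 + 1), N)
    acc = np.log(rng.random(N)) < g1 * (loglik(prop) - loglik(theta))
    theta[acc] = prop[acc]
print(theta.mean(), x.mean())          # particle cloud sits near the MLE
```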

8.
In constructing a scorecard, we partition each characteristic variable into a few attributes and assign weights to those attributes. A simulated annealing algorithm has been proposed for this task. A drawback of simulated annealing is that the number of cutpoints separating each characteristic variable into attributes is required as an input. We introduce a scoring method, called the classification spline machine (CSM), which determines cutpoints automatically via stepwise basis selection. In this paper, we compare the performance of the CSM and simulated annealing on simulated datasets. The results indicate that the CSM can be useful in the construction of scorecards.

9.
From the exact distribution of the maximum likelihood estimator of the average lifetime based on a progressively hybrid censored exponential sample, we derive an explicit expression for the Bayes risk of a sampling plan when a quadratic loss function is used. The simulated annealing algorithm is then used to determine the optimal sampling plan. Some optimal Bayes solutions under progressive hybrid and ordinary hybrid censoring schemes are presented to illustrate the effectiveness of the proposed method.
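The annealing search itself is easy to sketch; the Bayes-risk function below is only a placeholder with a plausible qualitative shape (sampling cost versus estimation risk), not the paper's exact expression derived from the progressive hybrid censored exponential distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

def bayes_risk(n, m):
    # Placeholder risk under quadratic loss: cost grows with the sample size n,
    # estimation risk shrinks with the number of observed failures m <= n.
    return 0.5 * n + 40.0 / (m + 1) + 5.0 / (n - m + 1)

# Simulated annealing over integer plans (n, m) with 1 <= m <= n <= 50.
cur = (20, 10)
cur_val = bayes_risk(*cur)
best, best_val, T = cur, cur_val, 5.0
for _ in range(5000):
    n = int(np.clip(cur[0] + rng.integers(-2, 3), 1, 50))
    m = int(np.clip(cur[1] + rng.integers(-2, 3), 1, n))
    val = bayes_risk(n, m)
    if val < cur_val or rng.random() < np.exp((cur_val - val) / T):
        cur, cur_val = (n, m), val
        if val < best_val:
            best, best_val = (n, m), val
    T *= 0.999
print(best, best_val)
```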

10.
The area of marked point processes is well developed, but simulation remains a challenging problem when mark correlations are to be included. In this paper we propose the use of simulated annealing to incorporate the spatial mark correlation into simulations of correlated marked point processes. Such simulations have wide applications in areas such as inference and goodness-of-fit investigations of proposed models. The technique is applied to a forest dataset, for which the results are extremely encouraging.

11.
A frequently encountered statistical problem is to determine whether the variability among k populations is heterogeneous. If the populations are measured on different scales, comparing variances may not be appropriate; in this case the coefficient of variation (CV) can be compared instead, because the CV is unitless. In this paper, a non-parametric test is introduced to test whether the CVs from k populations are different. Under the assumption that the populations are independent and normally distributed, the Miller test, the Feltz and Miller test, a saddlepoint-based test, the log-likelihood ratio test and the proposed simulated Bartlett-corrected log-likelihood ratio test are derived. Simulation results show the extreme accuracy of the simulated Bartlett-corrected log-likelihood ratio test when the model is correctly specified. If the model is mis-specified and the sample size is small, the proposed test still gives good results; however, with a mis-specified model and a large sample size, the non-parametric test is recommended.
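The abstract does not specify the authors' non-parametric test; as a generic stand-in, here is one simple permutation check for equal CVs across k groups, rescaling each group to unit mean so that labels are approximately exchangeable under the null. All details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
groups = [rng.normal(10, 2, 30), rng.normal(50, 10, 25), rng.normal(5, 1, 40)]

def cv_stat(gs):
    # Spread of the group CVs around their sample-size-weighted mean.
    cvs = np.array([g.std(ddof=1) / g.mean() for g in gs])
    ns = np.array([len(g) for g in gs])
    return np.sum(ns * (cvs - np.average(cvs, weights=ns))**2)

obs = cv_stat(groups)
# Rescaling by the group mean forces every group mean to 1 while leaving each
# CV unchanged, so under equal CVs the pooled values are roughly exchangeable.
pooled = np.concatenate([g / g.mean() for g in groups])
sizes = np.cumsum([len(g) for g in groups])[:-1]
B, count = 2000, 0
for _ in range(B):
    perm = rng.permutation(pooled)
    count += cv_stat(np.split(perm, sizes)) >= obs
print((count + 1) / (B + 1))   # approximate p-value
```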

12.
Genetic algorithms (GAs) are adaptive search techniques designed to find near-optimal solutions of large-scale optimization problems with multiple local maxima. Standard versions of the GA are defined for objective functions which depend on a vector of binary variables. The problem of finding the maximum a posteriori (MAP) estimate of a binary image in Bayesian image analysis appears to be well suited to a GA, as images have a natural binary representation and the posterior image probability is a multi-modal objective function. We use the numerical optimization problem posed in MAP image estimation as a test-bed on which to compare GAs with simulated annealing (SA), another all-purpose global optimization method. Our conclusions are that the GAs we have applied perform poorly, even after adaptation to this problem. This is somewhat unexpected, given the widespread claims of GAs' effectiveness, but it is in keeping with work by Jennison and Sheehan (1995) which suggests that GAs are not adept at handling problems involving a great many variables of roughly equal influence. We reach more positive conclusions concerning the use of the GA's crossover operation in recombining near-optimal solutions obtained by other methods. We propose a hybrid algorithm in which crossover is used to combine subsections of image reconstructions obtained using SA, and we show that this algorithm is more effective and efficient than SA or a GA individually.
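A sketch of the recombination idea: given two near-optimal binary reconstructions (stand-ins for SA outputs), a greedy block crossover keeps any block of the second parent that lowers a posterior energy combining Gaussian data fidelity with an Ising smoothing prior. The block size, energy weights, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def energy(img, data, beta=0.7, sigma=1.0):
    # Posterior energy for a binary image: Gaussian fidelity plus an Ising
    # prior counting disagreeing horizontal and vertical neighbours.
    fidelity = np.sum((data - img) ** 2) / (2 * sigma**2)
    disagree = (np.sum(img[:, 1:] != img[:, :-1])
                + np.sum(img[1:, :] != img[:-1, :]))
    return fidelity + beta * disagree

def crossover(a, b, data, block=8):
    # Greedy block crossover: start from parent a and swap in any block of
    # parent b that lowers the posterior energy of the child.
    child = a.copy()
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            trial = child.copy()
            trial[i:i+block, j:j+block] = b[i:i+block, j:j+block]
            if energy(trial, data) < energy(child, data):
                child = trial
    return child

truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
data = truth + rng.normal(0, 1.0, truth.shape)
a = (data > 0.3).astype(float)   # stand-ins for two SA reconstructions
b = (data > 0.7).astype(float)
child = crossover(a, b, data)
print(energy(a, data), energy(b, data), energy(child, data))
```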

13.
We study optimal spatial sampling design for covariance parameter estimation. The spatial process is modeled as a Gaussian random field, and maximum likelihood (ML) is used to estimate the covariance parameters. We use the log-determinant of the inverse Fisher information matrix as the design criterion and run simulations to investigate the relationship between the inverse Fisher information matrix and the covariance matrix of the ML estimates. A simulated annealing algorithm is developed to search for an optimal design among all possible designs on a fine grid. Since the design criterion depends on the unknown parameters, we define the relative efficiency of a design and consider minimax and Bayesian criteria to find designs that are robust over a range of parameter values. Simulation results are presented for the Matérn class of covariance functions.
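A sketch of the criterion and the search, assuming an exponential covariance (the Matérn with smoothness 1/2) and finite-difference derivatives; the grid size, design size, and cooling schedule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(8)
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)

def cov(pts, theta):
    # Exponential covariance: sigma2 * exp(-d / rho).
    sigma2, rho = theta
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / rho)

def criterion(pts, theta, eps=1e-5):
    # log det of the inverse Fisher information for theta = (sigma2, rho),
    # I_ij = 0.5 tr(S^-1 dS/dth_i S^-1 dS/dth_j), derivatives by differences.
    S = cov(pts, theta)
    Sinv = np.linalg.inv(S)
    derivs = []
    for i in range(2):
        t1 = np.array(theta, float)
        t1[i] += eps
        derivs.append((cov(pts, t1) - S) / eps)
    I = np.array([[0.5 * np.trace(Sinv @ di @ Sinv @ dj) for dj in derivs]
                  for di in derivs])
    return -np.linalg.slogdet(I)[1]   # = log det I^-1; smaller is better

theta0 = (1.0, 3.0)                   # the criterion depends on these unknowns
design = rng.choice(len(grid), 15, replace=False)
val, T = criterion(grid[design], theta0), 1.0
for _ in range(3000):
    prop = design.copy()
    out = np.setdiff1d(np.arange(len(grid)), prop)
    prop[rng.integers(15)] = rng.choice(out)   # swap one site off the design
    v = criterion(grid[prop], theta0)
    if v < val or rng.random() < np.exp((val - v) / T):
        design, val = prop, v
    T *= 0.999
print(val)
```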

14.
Recently, it has been shown that empirical likelihood ratios can be used to form confidence intervals and test hypotheses just as in the parametric case. We illustrate here the use of a particular kind of one-parameter sub-family of distributions in the analysis of empirical likelihood with censored data. This approach not only simplifies the theoretical analysis of the limiting behavior of the empirical likelihood ratio, it also gives clues for the numerical search of constrained maxima of an empirical likelihood.

15.
Multivariate extreme value statistical analysis is concerned with observations on several variables which are thought to possess some degree of tail dependence. The main approaches to inference for multivariate extremes consist in approximating either the distribution of block component-wise maxima or the distribution of the exceedances over a high threshold. Although the expressions of the asymptotic density functions of these distributions may be characterized, they cannot be computed in general. In this paper, we study the case where the spectral random vector of the multivariate max-stable distribution has known conditional distributions. The asymptotic density functions of the multivariate extreme value distributions may then be written through univariate integrals that are easily computed or simulated. The asymptotic properties of two likelihood estimators are presented, and the utility of the method is examined via simulation.

16.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images; as a potential improvement, Jubb and Jennison (1991) suggest the cascade algorithm, in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in both definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem, from which we gain insight into the properties of cascade algorithms.
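A sketch of the cascade idea with a single coarse level: ICM is first run on a 2×2-blocked image, and the coarse labelling is upsampled as the starting point for full-resolution ICM. The energy model and the blocking factor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def icm(img, data, beta=0.7, sigma=1.0, sweeps=5):
    # Iterated conditional modes for a binary image with Gaussian noise and an
    # Ising prior: each pixel takes the label minimizing its local energy.
    x = img.copy()
    for _ in range(sweeps):
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                nb = [x[u, v] for u, v in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                      if 0 <= u < x.shape[0] and 0 <= v < x.shape[1]]
                e = [(data[i, j] - lab)**2 / (2 * sigma**2)
                     + beta * sum(n != lab for n in nb) for lab in (0.0, 1.0)]
                x[i, j] = float(np.argmin(e))
    return x

# Cascade: run ICM on a 2x2-blocked coarse image, then refine at full size.
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
data = truth + rng.normal(0, 0.8, truth.shape)
coarse_data = data.reshape(16, 2, 16, 2).mean(axis=(1, 3))
coarse = icm((coarse_data > 0.5).astype(float), coarse_data)
start = np.kron(coarse, np.ones((2, 2)))   # upsample the coarse labelling
fine = icm(start, data)
print(np.mean(fine == truth))              # fraction of pixels recovered
```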

17.
In this paper we consider the optimal decomposition of Bayesian networks. More concretely, we examine empirically the applicability of genetic algorithms to the problem of the triangulation of moral graphs. This problem constitutes the only difficult step in the evidence propagation algorithm of Lauritzen and Spiegelhalter (1988) and is known to be NP-hard (Wen, 1991). We carry out experiments with distinct crossover and mutation operators and with different population sizes, mutation rates and selection biases. The results are analysed statistically. They turn out to improve the results obtained with most other known triangulation methods (Kjærulff, 1990) and are comparable to those obtained with simulated annealing (Kjærulff, 1990; Kjærulff, 1992).

18.
The problem of interest is to estimate the concentration curve and the area under the curve (AUC) by estimating the parameters of a linear regression model with an autocorrelated error process. We introduce a simple linear unbiased estimator of the concentration curve and the AUC. We show that this estimator, constructed from a sampling design generated by an appropriate density, is asymptotically optimal in the sense that it has exactly the same asymptotic performance as the best linear unbiased estimator. Moreover, we prove that the optimal design is robust with respect to a minimax criterion. When repeated observations are available, this estimator is consistent and has an asymptotic normal distribution. Finally, a simulated annealing algorithm is applied to a pharmacokinetic model with correlated errors.
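A sketch of the generalized least squares construction under assumptions: a quadratic concentration curve, AR(1) errors, and a given sampling design. The best linear unbiased estimator uses the error correlation matrix, and the AUC is then the integral of the fitted curve.

```python
import numpy as np

rng = np.random.default_rng(10)

# Linear concentration model y(t) = b0 + b1*t + b2*t^2 + e(t), AR(1) errors.
t = np.linspace(0, 10, 25)                 # sampling design (assumed given)
X = np.column_stack([np.ones_like(t), t, t**2])
beta_true, phi = np.array([1.0, 2.0, -0.2]), 0.6
e = np.zeros_like(t)
for i in range(1, len(t)):
    e[i] = phi * e[i-1] + rng.normal(0, 0.3)
y = X @ beta_true + e

# Best linear unbiased (GLS) estimator with AR(1) correlation V_ij = phi^|i-j|.
V = phi ** np.abs(np.subtract.outer(np.arange(len(t)), np.arange(len(t))))
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# AUC of the estimated curve via the trapezoidal rule on a fine time grid.
tg = np.linspace(t[0], t[-1], 1001)
curve = np.column_stack([np.ones_like(tg), tg, tg**2]) @ beta_hat
auc = np.sum((curve[1:] + curve[:-1]) / 2 * np.diff(tg))
print(beta_hat, auc)
```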

19.
A simple method is proposed to detect the number of change points in a sequence of independent exponential-family random variables. An estimator that maximizes a criterion SC(k), defined as the log-likelihood function with a penalty term, is used for detection. Under some mild assumptions, consistency of the estimator for the true number of change points and boundedness of the distance between the estimated and true change locations are obtained. Some simulation results are given, and the Nile data are investigated with this method.
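A sketch of the penalized-likelihood count estimate for exponential data: dynamic programming finds the best segmentation with k change points, and SC(k) subtracts an assumed BIC-type penalty per change point (the paper's exact penalty may differ).

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.concatenate([rng.exponential(1.0, 60), rng.exponential(4.0, 60),
                    rng.exponential(0.5, 60)])   # two true change points
n = len(x)

def seg_cost(i, j, csum=np.concatenate([[0.0], np.cumsum(x)])):
    # Negative maximized exponential log-likelihood of x[i:j] (rate = m / sum).
    m, s = j - i, csum[j] - csum[i]
    return -(m * np.log(m / s) - m)

kmax = 5
# dp[k][j] = minimal cost of splitting x[0:j] into k+1 segments.
dp = [[seg_cost(0, j) if j > 0 else np.inf for j in range(n + 1)]]
for k in range(1, kmax + 1):
    row = [np.inf] * (n + 1)
    for j in range(k + 1, n + 1):
        row[j] = min(dp[k-1][t] + seg_cost(t, j) for t in range(k, j))
    dp.append(row)

pen = 3.0 * np.log(n)                 # assumed BIC-type penalty per change point
sc = [-dp[k][n] - k * pen for k in range(kmax + 1)]
print(int(np.argmax(sc)))             # estimated number of change points
```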

20.
The EM algorithm is a popular method for maximizing a likelihood in the presence of incomplete data. When the likelihood has multiple local maxima, the parameter space can be partitioned into domains of convergence, one for each local maximum. In this paper we investigate these domains for the location family generated by the t-distribution. We show that, perhaps somewhat surprisingly, these domains need not be connected sets. As an extreme case we give an example of a domain which consists of an infinite union of disjoint open intervals. Thus the convergence behaviour of the EM algorithm can be quite sensitive to the starting point.
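The EM iteration for the t location family is classical (E-step weights, weighted-mean M-step); this sketch maps a grid of starting values to their limit points so that the domains of convergence can be inspected directly. The sample and the degrees of freedom are illustrative choices that produce multiple local maxima.

```python
import numpy as np

rng = np.random.default_rng(12)
nu = 0.5                                  # heavy tails give multiple local maxima
x = np.array([-20.0, 1.0, 2.0, 3.0])      # small sample with one far-out point

def em_limit(mu0, iters=2000):
    # EM for the location of a t(nu) family with unit scale: the E-step gives
    # weights w_i = (nu + 1) / (nu + (x_i - mu)^2), the M-step a weighted mean.
    mu = mu0
    for _ in range(iters):
        w = (nu + 1.0) / (nu + (x - mu) ** 2)
        mu = np.sum(w * x) / np.sum(w)
    return mu

# Map a grid of starting values to their limit points; the preimage of each
# local maximum is its domain of convergence (it need not be an interval).
starts = np.linspace(-25, 10, 141)
limits = np.array([em_limit(s) for s in starts])
for m in np.unique(np.round(limits, 3)):
    dom = starts[np.abs(limits - m) < 1e-2]
    print(f"limit {m:8.3f}: {dom.size} starts in [{dom.min():.1f}, {dom.max():.1f}]")
```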
