Similar Articles
20 similar articles found (search time: 15 ms)
1.
Genetic algorithms (GAs) are adaptive search techniques designed to find near-optimal solutions of large scale optimization problems with multiple local maxima. Standard versions of the GA are defined for objective functions which depend on a vector of binary variables. The problem of finding the maximum a posteriori (MAP) estimate of a binary image in Bayesian image analysis appears to be well suited to a GA as images have a natural binary representation and the posterior image probability is a multi-modal objective function. We use the numerical optimization problem posed in MAP image estimation as a test-bed on which to compare GAs with simulated annealing (SA), another all-purpose global optimization method. Our conclusions are that the GAs we have applied perform poorly, even after adaptation to this problem. This is somewhat unexpected, given the widespread claims of GAs' effectiveness, but it is in keeping with work by Jennison and Sheehan (1995) which suggests that GAs are not adept at handling problems involving a great many variables of roughly equal influence. We reach more positive conclusions concerning the use of the GA's crossover operation in recombining near-optimal solutions obtained by other methods. We propose a hybrid algorithm in which crossover is used to combine subsections of image reconstructions obtained using SA and we show that this algorithm is more effective and efficient than SA or a GA individually.
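As a toy illustration of the SA baseline in this abstract (not the authors' implementation), the sketch below anneals a small binary image under an Ising-style smoothing prior; the energy function, the parameter values, and the 8x8 example are all assumptions made for illustration.

```python
import math, random

def energy(x, y, beta=0.7, lam=1.0):
    """Negative log-posterior (up to a constant): an Ising smoothing prior
    plus a data term tying the image x to the noisy observation y."""
    n = len(x)
    e = 0.0
    for i in range(n):
        for j in range(n):
            e += lam * (x[i][j] != y[i][j])          # data fidelity
            if i + 1 < n:
                e += beta * (x[i][j] != x[i + 1][j])  # vertical neighbour
            if j + 1 < n:
                e += beta * (x[i][j] != x[i][j + 1])  # horizontal neighbour
    return e

def anneal(y, sweeps=150, t0=2.0, cool=0.95, seed=0):
    """Single-site Metropolis moves with a geometric cooling schedule."""
    n = len(y)
    rng = random.Random(seed)
    x = [row[:] for row in y]                        # start from the data
    e = energy(x, y)
    t = t0
    for _ in range(sweeps):
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            x[i][j] ^= 1                             # propose a pixel flip
            e_new = energy(x, y)
            if e_new <= e or rng.random() < math.exp((e - e_new) / t):
                e = e_new                            # accept the move
            else:
                x[i][j] ^= 1                         # reject: undo
        t *= cool
    return x, e

# Noisy observation of a half-black, half-white 8x8 image.
rng = random.Random(1)
truth = [[0 if j < 4 else 1 for j in range(8)] for _ in range(8)]
noisy = [[p ^ (rng.random() < 0.15) for p in row] for row in truth]
restored, e_final = anneal(noisy)
```

A crossover-based hybrid in the spirit of the abstract would then recombine subblocks of several such restored images.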

2.
Modern sampling designs in survey statistics, in general, are constructed in order to optimize the accuracy of estimators such as totals, means and proportions. In stratified random sampling a variance-minimal solution was introduced by Neyman and Tschuprov. However, practical constraints may lead to limitations of the domain of sampling fractions which have to be considered within the optimization process. Special attention has to be paid to the complexity of numerical solutions in cases with many strata or when the optimal allocation has to be applied repeatedly, such as in iterative solutions of stratification problems. The present article gives an overview of recent numerical algorithms which allow adequate inclusion of box constraints in the numerical optimization process. These box constraints may play an important role in statistical modeling. Furthermore, a new approach through a fixed point iteration with a finite termination property is presented.
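The boxed Neyman allocation can be sketched with the standard fix-and-reallocate iteration, a simplified relative of the finitely terminating fixed point scheme mentioned above (function names, data and bounds are illustrative assumptions):

```python
def neyman_boxed(n, N, S, lo, hi):
    """Neyman allocation n_h proportional to N_h * S_h, subject to box
    constraints lo[h] <= n_h <= hi[h]. Strata whose unconstrained share
    violates a bound are pinned at that bound and the remaining sample is
    re-allocated; the loop ends after at most H rounds."""
    H = len(N)
    fixed = {}                      # strata pinned at a bound
    free = set(range(H))
    while True:
        rem = n - sum(fixed.values())
        total = sum(N[h] * S[h] for h in free)
        alloc = {h: rem * N[h] * S[h] / total for h in free}
        viol = {h: (lo[h] if alloc[h] < lo[h] else hi[h])
                for h in free if not lo[h] <= alloc[h] <= hi[h]}
        if not viol:
            return [fixed.get(h, alloc.get(h)) for h in range(H)]
        fixed.update(viol)
        free -= set(viol)

# Three strata: the third would get a third of the sample under plain
# Neyman allocation but is capped at 20 units.
alloc = neyman_boxed(100, N=[1000, 500, 100], S=[1, 2, 10],
                     lo=[10, 10, 5], hi=[60, 60, 20])
```

The returned allocations are fractional; in practice they would still be rounded to integer sample sizes.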

3.
In this paper we develop a study on several types of parallel genetic algorithms (PGAs). Our motivation is to bring some uniformity to the proposal, comparison, and knowledge exchange among the traditionally opposite kinds of serial and parallel GAs. We comparatively analyze the properties of steady-state, generational, and cellular genetic algorithms. Afterwards, this study is extended to consider a distributed model consisting of a ring of GA islands. The analyzed features are the time complexity, selection pressure, schema processing rates, efficacy in finding an optimum, efficiency, speedup, and resistance to scalability. Besides that, we briefly discuss how the migration policy affects the search. Also, some of the search properties of cellular GAs are investigated. The selected benchmark is a representative subset of problems containing real-world difficulties. We often conclude that parallel GAs are numerically better and faster than equivalent sequential GAs. Our aim is to shed some light on the advantages and drawbacks of various sequential and parallel GAs to help researchers use them in the very diverse application fields of evolutionary computation.

4.
CVX-based numerical algorithms are widely and freely available for solving convex optimization problems but their applications to solve optimal design problems are limited. Using the CVX programs in MATLAB, we demonstrate their utility and flexibility over traditional algorithms in statistics for finding different types of optimal approximate designs under a convex criterion for nonlinear models. They are generally fast and easy to implement for any model and any convex optimality criterion. We derive theoretical properties of the algorithms and use them to generate new A-, c-, D- and E-optimal designs for various nonlinear models, including multi-stage and multi-objective optimal designs. We report properties of the optimal designs and provide sample CVX program codes for some of our examples that users can amend to find tailored optimal designs for their problems. The Canadian Journal of Statistics 47: 374–391; 2019 © 2019 Statistical Society of Canada

5.
Genetic algorithms are a set of algorithms with properties which enable them to efficiently search large solution spaces where conventional statistical methodology is inappropriate. They have been used to find effective control and design strategies in industry, for finding rules relating factors and outcomes in medicine and business, and for solving problems ranging from function optimization to identification of patterns in data. They work using ideas from biology, specifically from population genetics, and are appealing because of their robustness in the presence of noise and their ability to cope with highly non-linear, multimodal and multivariate problems. This paper reviews the current literature on genetic algorithms. It looks at ways of defining genetic algorithms for various problems, and examples are introduced to illustrate their application in different contexts. It summarizes the different aspects which have been, and continue to be, the focus of research, and areas requiring further investigation are identified.
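A minimal generational GA of the kind this review covers might look as follows, assuming tournament selection, one-point crossover and bit-flip mutation; all parameter values are illustrative, and the "onemax" fitness (count of 1-bits) is just a standard toy objective:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, gens=60,
                      p_cross=0.9, p_mut=0.02, seed=0):
    """Generational GA on binary strings: tournament selection,
    one-point crossover and independent bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)          # binary tournament
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            child = tournament()[:]         # copy the winner
            mate = tournament()
            if rng.random() < p_cross:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = child[:cut] + mate[cut:]
            for i in range(n_bits):         # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum, n_bits=20)    # "onemax": maximize 1-bits
```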

6.
We consider a linear regression model where there are group structures in covariates. The group LASSO has been proposed for group variable selection. Many nonconvex penalties, such as the smoothly clipped absolute deviation and the minimax concave penalty, have been extended to group variable selection problems. The group coordinate descent (GCD) algorithm is popularly used for fitting these models. However, GCD algorithms are hard to apply to nonconvex group penalties due to computational complexity unless the design matrix is orthogonal. In this paper, we propose an efficient optimization algorithm for nonconvex group penalties by combining the concave-convex procedure and the group LASSO algorithm. We also extend the proposed algorithm to generalized linear models. We evaluate the numerical efficiency of the proposed algorithm compared to existing GCD algorithms through simulated data and real data sets.
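For intuition, the closed-form group update that makes GCD attractive in the orthonormal case is group soft-thresholding: shrink the whole group coefficient vector toward zero, or kill the group entirely. The concave-convex treatment of nonconvex penalties amounts to repeatedly calling a reweighted version of this same update, which is omitted in this minimal sketch:

```python
import math

def group_soft_threshold(z, lam):
    """Group-LASSO update for one orthonormal group: scale the whole
    vector z by (1 - lam/||z||)_+ , i.e. set the group to zero when
    its norm falls below the threshold lam."""
    norm = math.sqrt(sum(v * v for v in z))
    if norm <= lam:
        return [0.0] * len(z)      # the entire group is selected out
    return [(1.0 - lam / norm) * v for v in z]
```

In the group LASSO the threshold is typically scaled by the group size (lam * sqrt(p_g)); cycling this update over groups is one pass of GCD.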

7.
Many different algorithms have been proposed to solve penalized variable selection problems, in particular lasso and its variants, including group lasso and fused lasso. Loss functions other than quadratic loss also pose significant challenges for finding efficient solvers. Here, we note that Nesterov’s method can be used to transform an optimization problem with general smooth convex loss to quadratic loss with identity covariate matrix in each iteration. After such reduction, the problem becomes much easier to solve or even can be solved in closed form in some cases. We perform some simulations and apply our implementation to phoneme discrimination.
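The reduction can be illustrated on the lasso: each iteration majorizes the smooth loss by a quadratic with identity "covariate matrix", whose penalized minimizer is a closed-form soft-threshold. This ISTA-style sketch and its tiny example are stand-ins of mine, not the authors' code:

```python
def soft(v, t):
    """Elementwise soft-thresholding: the closed-form lasso prox."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista_lasso(X, y, lam, step, iters=500):
    """Each iteration replaces the smooth loss (1/2n)||Xb - y||^2 by a
    quadratic surrogate at the current point, then solves the surrogate
    plus L1 penalty in closed form via soft-thresholding."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        resid = [sum(X[i][j] * beta[j] for j in range(p)) - y[i]
                 for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n
                for j in range(p)]
        beta = soft([beta[j] - step * grad[j] for j in range(p)],
                    step * lam)
    return beta

beta = ista_lasso([[1, 0], [0, 1], [1, 1]], [2, 0, 2], lam=0.1, step=0.5)
```

For this tiny design the lasso stationarity conditions give beta = (1.85, 0), which the iteration reaches to high accuracy.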

8.
Machine scheduling and covering problems occur in many applications such as load balancing in network communication, channel assignment, parallel processing in large-scale computing, and task arrangement in flexible manufacturing systems. In this paper we study machine covering problems with combined partial information on m parallel identical machines. We consider sequences where the processing times of all jobs are at most 1/k times the optimal value (for an integer k). For the case where the optimal value is not known in advance, we show that the LS algorithm is optimal. For the case where the optimal value is known in advance, we give lower bounds and present semi-online algorithms.
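For intuition, the LS (greedy list) rule referred to above assigns each arriving job to the machine with the smallest current load; the covering objective is the minimum machine load at the end. A minimal sketch:

```python
def ls_cover(jobs, m):
    """Online LS rule for machine covering: each job in arrival order
    goes to the currently least-loaded of the m identical machines;
    returns the resulting cover (minimum machine load)."""
    loads = [0.0] * m
    for p in jobs:
        i = loads.index(min(loads))   # least-loaded machine, lowest index
        loads[i] += p
    return min(loads)

cover = ls_cover([3, 1, 1, 1], m=2)   # the three unit jobs balance the 3
```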

9.
For decision problems with only incomplete information about the distribution of the states, numerically computing all maxEmin-optimal solutions is often difficult or impossible. In this paper we present a numerical method for computing all maxEmin-optimal solutions for decision problems with a finite number of actions and states. Given linear partial information, essentially only linear programs must be solved to find all optimal solutions.

10.
The numerical treatment of nonlinear model fitting problems can often be simplified by manipulating the model equations. Algebraic manipulations, including nonlinear transformations of model parameters, do not change the numerical result of the adjustment and can be a powerful method to improve the performance of solution algorithms. Nonlinear transformations of the observations, on the other hand, do change the numerical results unless the normal equations are transformed accordingly. The latter transformation has been neglected by previous authors and this article provides a complete set of formulas that are needed to implement transformations of observations. The transformations are in general less useful than parameter transformations for the improvement of algorithms but may have other applications in particular situations.

11.
The complete-data model that underlies an Expectation-Maximization (EM) algorithm must have a parameter space that coincides with the parameter space of the observed-data model. Otherwise, maximization of the observed-data log-likelihood will be carried out over a space that does not coincide with the desired parameter space. In some contexts, however, a natural complete-data model may be defined only for parameter values within a subset of the observed-data parameter space. In this paper we discuss situations where this can still be useful if the complete-data model can be viewed as a member of a finite family of complete-data models that have parameter spaces which collectively cover the observed-data parameter space. Such a family of complete-data models defines a family of EM algorithms which together lead to a finite collection of constrained maxima of the observed-data log-likelihood. Maximization of the log-likelihood function over the full parameter space then involves identifying the constrained maximum that achieves the greatest log-likelihood value. Since optimization over a finite collection of candidates is referred to as combinatorial optimization, we refer to such a family of EM algorithms as a combinatorial EM (CEM) algorithm. As well as discussing the theoretical concepts behind CEM algorithms, we discuss strategies for improving the computational efficiency when the number of complete-data models is large. Various applications of CEM algorithms are also discussed, ranging from simple examples that illustrate the concepts, to more substantive examples that demonstrate the usefulness of CEM algorithms in practice.

12.
The authors propose a class of procedures for local likelihood estimation from data that are either interval-censored or that have been aggregated into bins. One such procedure relies on an algorithm that generalizes existing self-consistency algorithms by introducing kernel smoothing at each step of the iteration. The entire class of procedures yields estimates that are obtained as solutions of fixed point equations. By discretizing and applying numerical integration, the authors use fixed point theory to study convergence of algorithms for the class. Rapid convergence is effected by the implementation of a local EM algorithm as a global Newton iteration. The latter requires an explicit solution of the local likelihood equations which can be found by using the symbolic Newton-Raphson algorithm, if necessary.

13.
Models for geostatistical data introduce spatial dependence in the covariance matrix of location-specific random effects. This is usually defined to be a parametric function of the distances between locations. Bayesian formulations of such models overcome asymptotic inference and estimation problems involved in maximum likelihood-based approaches and can be fitted using Markov chain Monte Carlo (MCMC) simulation. The MCMC implementation, however, requires repeated inversions of the covariance matrix, which makes the problem computationally intensive, especially for a large number of locations. In the present work, we propose to convert the spatial covariance matrix to a sparse matrix and compare a number of numerical algorithms especially suited to the MCMC framework for accelerating large matrix inversion. The algorithms are assessed empirically on simulated datasets of different size and sparsity. We conclude that the band solver applied after ordering the distance matrix substantially reduces the computational time of inverting covariance matrices.

14.
Summary. The problem of optimizing a number of simultaneous bets is considered, using primarily log-utility. Stochastic gradient-based algorithms for solving this problem are developed and compared with the simplex method. The solutions may be regarded as a generalization of 'Kelly staking' to the case of many simultaneous bets. Properties of the solutions are examined in two example cases using real odds from sports bookmakers. The algorithms that are developed also have wide applicability beyond sports betting and may be extended to general portfolio optimization problems, with any reasonable utility function.
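A naive numerical version of this generalized 'Kelly staking' problem can be sketched by enumerating the joint outcomes of independent bets and climbing the expected log-wealth by finite-difference gradient ascent (not the authors' stochastic-gradient algorithm; the per-bet stake cap is a crude assumption to keep wealth positive):

```python
import itertools, math

def expected_log_growth(f, probs, odds):
    """E[log wealth] when fraction f[k] of wealth is staked on independent
    bet k, which pays net odds odds[k] with win probability probs[k]."""
    total = 0.0
    for wins in itertools.product((0, 1), repeat=len(f)):
        prob = 1.0
        wealth = 1.0 - sum(f)                       # stakes committed up front
        for k, w in enumerate(wins):
            prob *= probs[k] if w else 1.0 - probs[k]
            if w:
                wealth += f[k] * (1.0 + odds[k])    # stake back plus winnings
        total += prob * math.log(wealth)
    return total

def kelly_fractions(probs, odds, lr=0.05, iters=2000, eps=1e-6):
    """Finite-difference gradient ascent on the log-growth rate; each
    stake is crudely capped at 0.9/K so the total staked stays below 1."""
    K = len(probs)
    f = [0.01] * K
    for _ in range(iters):
        base = expected_log_growth(f, probs, odds)
        for k in range(K):
            bumped = list(f)
            bumped[k] += eps
            grad = (expected_log_growth(bumped, probs, odds) - base) / eps
            f[k] = min(max(f[k] + lr * grad, 0.0), 0.9 / K)
    return f

f = kelly_fractions(probs=[0.6], odds=[1.0])        # a single even-money bet
```

For a single even-money bet with win probability 0.6 the analytic Kelly fraction is p - q/b = 0.2, which the iteration approaches.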

15.
The sequential minimal optimization (SMO) algorithm is effective in solving large-scale support vector machines (SVMs). The existing algorithms all assume that the kernels are positive definite (PD) or positive semi-definite (PSD) and should meet the Mercer condition. Some kernels, however, such as the sigmoid kernel, which originates from neural networks and has since been used extensively in SVMs, are only conditionally PD in certain circumstances; in addition, in practice it is often difficult to prove whether a kernel is PD or PSD except for some well-known kernels. The applications of the existing SMO algorithms are therefore limited. In view of this deficiency, an algorithm for solving ε-SVR with non-positive semi-definite (non-PSD) kernels is proposed. Unlike the existing algorithms, which must consider four Lagrange multipliers, the algorithm proposed in this article needs to consider only two Lagrange multipliers during implementation. The proposed algorithm simplifies the implementation by expanding the original dual programming of ε-SVR and solving its KKT conditions, and is thus easily applied to solving ε-SVR with non-PSD kernels. The presented algorithm is evaluated on five benchmark problems and one real-world problem. The results show that ε-SVR with a non-PSD kernel provides more accurate prediction than with a PD kernel.

16.

Regression spline smoothing is a popular approach for conducting nonparametric regression. An important issue associated with it is the choice of a "theoretically best" set of knots. Different statistical model selection methods, such as Akaike's information criterion and generalized cross-validation, have been applied to derive different "theoretically best" sets of knots. Typically these best knot sets are defined implicitly as the optimizers of some objective functions. Hence another equally important issue concerning regression spline smoothing is how to optimize such objective functions. In this article different numerical algorithms that are designed for carrying out such optimization problems are compared by means of a simulation study. Both the univariate and bivariate smoothing settings will be considered. Based on the simulation results, recommendations for choosing a suitable optimization algorithm under various settings will be provided.

17.
Summary. Solving Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data has many applications for dynamic models. A large number of algorithms based on particle filtering methods, also known as sequential Monte Carlo algorithms, have recently been proposed to solve these problems. We propose a special particle filtering method which uses random mixtures of normal distributions to represent the posterior distributions of partially observed Gaussian state space models. This algorithm is based on a marginalization idea for improving efficiency and can lead to substantial gains over standard algorithms. It differs from previous algorithms which were only applicable to conditionally linear Gaussian state space models. Computer simulations are carried out to evaluate the performance of the proposed algorithm for dynamic tobit and probit models.

18.
Although applications of Bayesian analysis for numerical quadrature problems have been considered before, it is only very recently that statisticians have focused on the connections between statistics and numerical analysis of differential equations. In line with this very recent trend, we show how certain commonly used finite difference schemes for numerical solutions of ordinary and partial differential equations can be considered in a regression setting. Focusing on this regression framework, we apply a simple Bayesian strategy to obtain confidence intervals for the finite difference solutions. We apply this framework on several examples to show how the confidence intervals are related to truncation error and illustrate the utility of the confidence intervals for the examples considered.

19.
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and algorithms that differentiate the EM operator are identified.
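The first approach, differentiating the score numerically, is easy to illustrate outside the EM setting: here a central difference of the Poisson score recovers the observed information and hence a standard error (a toy sketch of mine, not one of the paper's algorithms; the data are made up):

```python
import math

def score(lam, data):
    """Score function: derivative of the Poisson log-likelihood in the
    rate parameter lam, i.e. sum(x)/lam - n."""
    return sum(data) / lam - len(data)

def observed_info(lam, data, h=1e-5):
    """Differentiate the score by a central difference to get the
    observed information (negative Hessian of the log-likelihood)."""
    return -(score(lam + h, data) - score(lam - h, data)) / (2.0 * h)

data = [3, 5, 4, 6, 2, 4]
mle = sum(data) / len(data)                      # MLE of the rate: x-bar
se = 1.0 / math.sqrt(observed_info(mle, data))   # SE from inverse information
```

For Poisson data the exact observed information is sum(x)/lam^2, so the numerical standard error can be checked against sqrt(lam_hat / n).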

20.
Searching for regions of the input space where a statistical model is inappropriate is useful in many applications. The study proposes an algorithm for finding local departures from a regression-type prediction model. The algorithm returns low-dimensional hypercubes where the average prediction error clearly departs from zero. The study describes the developed algorithm and shows successful applications on simulated and real data from steel plate production. Algorithms originally developed for searching for regions of high response value in the input space are reviewed and considered as alternative methods for locating model departures. The proposed algorithm succeeds in locating the model departure regions better than the compared alternatives. The algorithm can be utilized in sequential follow-up of a model as time goes on and new data are observed.
