Similar Documents
20 similar records retrieved
1.
The nonlinear programming problem is the general case of the mathematical programming problem, in which both the objective and the constraint functions are nonlinear; it is the most difficult class of smooth optimization problems to solve. In this article, we suggest a stochastic search method for general nonlinear programming problems that is not an iterative algorithm but an interior-point method. The proposed method finds a near-optimal solution to the problem. The results of a few numerical studies are reported. The efficiency of the new method is assessed by comparison and found to be reasonable.
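A minimal sketch of such a non-iterative stochastic interior-point search on a toy problem is given below; the objective, constraints, bounding box and sample budget are illustrative assumptions, not the problem set or sampling scheme used in the article.

```python
import numpy as np

# Toy nonlinear program: minimize f(x) subject to g_i(x) <= 0 (both nonlinear).
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraints(x):
    # g1: x0^2 - x1 <= 0, g2: x0 + x1 - 2 <= 0  (illustrative constraints)
    return np.array([x[0] ** 2 - x[1], x[0] + x[1] - 2.0])

def stochastic_interior_search(n_samples=200_000, box=(-3.0, 3.0), seed=0):
    """Draw points in a bounding box, keep only strictly feasible (interior)
    ones, and return the best feasible point found -- a near-optimal solution."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_samples):
        x = rng.uniform(box[0], box[1], size=2)
        if np.all(constraints(x) < 0.0):        # strictly interior point
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

if __name__ == "__main__":
    x_star, f_star = stochastic_interior_search()
    print("near-optimal x:", x_star, "objective:", f_star)
```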

2.
Conventional optimization approaches, such as Linear Programming, Dynamic Programming and Branch-and-Bound methods are well established for solving relatively simple scheduling problems. Algorithms such as Simulated Annealing, Taboo Search and Genetic Algorithms (GA) have recently been applied to large combinatorial problems. Owing to the complex nature of these problems it is often impossible to search the whole problem space and an optimal solution cannot, therefore, be guaranteed. A BiCriteria Genetic Algorithm (BCGA) has been developed for the scheduling of complex products with multiple resource constraints and deep product structure. This GA identifies and corrects infeasible schedules and takes account of the early supply of components and assemblies, late delivery of final products and capacity utilization. The research has used manufacturing data obtained from a capital goods company. Genetic Algorithms include a number of parameters, including the probabilities of crossover and mutation, the population size and the number of generations. The BCGA scheduling tool provides 16 alternative crossover operations and eight different mutation mechanisms. The overall objective of this study was to develop an efficient design-of-experiments approach to identify genetic algorithm operators and parameters that produce solutions with minimum total cost. The case studies were based upon a complex, computationally intensive scheduling problem that was insoluble using conventional approaches. This paper describes an efficient sequential experimental strategy that enabled this work to be performed within a reasonable time. The first stage was a screening experiment, which had a fractional factorial embedded within a half Latin-square design. The second stage was a half-fraction design with a reduced number of GA operators. The results are compared with previous studies. It is demonstrated that, in this case, improved GA performance was achieved using the experimental strategy proposed. The appropriate genetic operators and parameters may be case specific, leading to the view that experimental design may be the best way to proceed when finding the 'best' combination of GA operators and parameters.
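The paper's scheduling model and its 16 crossover and eight mutation operators are not reproduced here; the sketch below only illustrates the screening idea, a two-level full factorial over a few GA parameters run against a simple stand-in objective (OneMax), with the best configuration reported. The parameter levels, the surrogate problem and the compact GA are assumptions.

```python
import itertools
import numpy as np

def run_ga(pop_size, p_cross, p_mut, generations=60, n_bits=64, seed=0):
    """A minimal generational GA (tournament selection, one-point crossover,
    bit-flip mutation) on OneMax; returns the best fitness reached."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        fit = pop.sum(axis=1)
        a, b = rng.integers(0, pop_size, size=(2, pop_size))      # binary tournaments
        parents = np.where((fit[a] >= fit[b])[:, None], pop[a], pop[b])
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):                       # one-point crossover
            if rng.random() < p_cross:
                cut = rng.integers(1, n_bits)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        mut_mask = rng.random(children.shape) < p_mut             # bit-flip mutation
        pop = np.where(mut_mask, 1 - children, children)
    return pop.sum(axis=1).max()

# Two-level factorial screening over three GA factors (levels are illustrative).
levels = {"pop_size": [20, 80], "p_cross": [0.6, 0.9], "p_mut": [0.01, 0.05]}
results = []
for pop_size, p_cross, p_mut in itertools.product(*levels.values()):
    score = np.mean([run_ga(pop_size, p_cross, p_mut, seed=s) for s in range(3)])
    results.append(((pop_size, p_cross, p_mut), score))
for cfg, score in sorted(results, key=lambda r: -r[1]):
    print(cfg, "mean best fitness:", round(float(score), 1))
```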

3.
Conventional optimization approaches, such as Linear Programming, Dynamic Programming and Branch-and-Bound methods are well established for solving relatively simple scheduling problems. Algorithms such as Simulated Annealing, Taboo Search and Genetic Algorithms (GA) have recently been applied to large combinatorial problems. Owing to the complex nature of these problems it is often impossible to search the whole problem space and an optimal solution cannot, therefore, be guaranteed. A BiCriteria Genetic Algorithm (BCGA) has been developed for the scheduling of complex products with multiple resource constraints and deep product structure. This GA identifies and corrects infeasible schedules and takes account of the early supply of components and assemblies, late delivery of final products and capacity utilization. The research has used manufacturing data obtained from a capital goods company. Genetic Algorithms include a number of parameters, including the probabilities of crossover and mutation, the population size and the number of generations. The BCGA scheduling tool provides 16 alternative crossover operations and eight different mutation mechanisms. The overall objective of this study was to develop an efficient design-of-experiments approach to identify genetic algorithm operators and parameters that produce solutions with minimum total cost. The case studies were based upon a complex, computationally intensive scheduling problem that was insoluble using conventional approaches. This paper describes an efficient sequential experimental strategy that enabled this work to be performed within a reasonable time. The first stage was a screening experiment, which had a fractional factorial embedded within a half Latin-square design. The second stage was a half-fraction design with a reduced number of GA operators. The results are compared with previous studies. It is demonstrated that, in this case, improved GA performance was achieved using the experimental strategy proposed. The appropriate genetic operators and parameters may be case specific, leading to the view that experimental design may be the best way to proceed when finding the ‘best’ combination of GA operators and parameters.

4.
When a genetic algorithm (GA) is employed in a statistical problem, the result is affected both by variability due to sampling and by the stochastic elements of the algorithm. Both of these components should be controlled in order to obtain reliable results. In the present work we analyze parametric estimation problems tackled by GAs and pursue two objectives. The first is a formal variability analysis of the final estimates, showing that their variability can easily be decomposed into the two sources. The second introduces a framework for GA estimation with fixed computational resources, a form of the statistical and computational trade-off question that is crucial in recent problems; in this situation the result should be optimal from both the statistical and the computational point of view, considering the two sources of variability and the constraints on resources. Simulation studies are presented to illustrate the proposed method and the statistical and computational trade-off question.
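A small numerical sketch of that decomposition, using a deliberately noisy random-search estimator as a stand-in for a GA-based estimator (the paper's estimator, models and resource constraints are not reproduced): by the law of total variance, the overall variance of the final estimate splits into a sampling component (across data sets) and an algorithmic component (across seeds).

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_estimator(sample, seed):
    """Stand-in for a GA-based estimator of the mean: crude random search over
    candidate values, so the result depends on both the data and the seed."""
    r = np.random.default_rng(seed)
    candidates = r.uniform(sample.min(), sample.max(), size=200)
    losses = [np.mean((sample - c) ** 2) for c in candidates]
    return candidates[int(np.argmin(losses))]

R, S, n = 200, 20, 50            # data replications, seeds per sample, sample size
estimates = np.empty((R, S))
for i in range(R):
    sample = rng.normal(loc=5.0, scale=2.0, size=n)   # true parameter = 5
    for j in range(S):
        estimates[i, j] = noisy_estimator(sample, seed=1000 * i + j)

within = estimates.var(axis=1, ddof=1).mean()    # algorithmic (seed-to-seed) variability
between = estimates.mean(axis=1).var(ddof=1)     # sampling variability across data sets
total = estimates.reshape(-1).var(ddof=1)
print(f"algorithmic: {within:.4f}  sampling: {between:.4f}  total: {total:.4f}")
# Law of total variance: total is approximately between + within (up to finite-sample error).
```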

5.
In this paper, we consider an inspection policy problem for a one-shot system with two types of units over a finite time span and want to determine the inspection intervals optimally, with the replacement points of Type 2 units given. The interval availability and the life cycle cost are used as optimization criteria and are estimated by simulation. Two optimization models are proposed to find the optimal inspection intervals for exponential and general distributions. A heuristic method and a genetic algorithm are proposed to find near-optimal inspection intervals that satisfy the target interval availability and minimize the life-cycle cost. We study numerical examples to compare the heuristic method with the genetic algorithm and to investigate the effect of the model parameters on the optimal solutions.
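The paper's two-unit model, cost structure and optimization models are not reproduced here; the sketch below is only a simplified Monte Carlo estimator of interval availability for a single unit with exponential failures that stay hidden until the next periodic inspection. The failure rate, horizon and candidate intervals are assumptions.

```python
import numpy as np

def interval_availability(tau, horizon=1000.0, rate=0.01, n_rep=2000, seed=0):
    """Monte Carlo estimate of interval availability for a one-shot unit whose
    failures (exponential with `rate`) are hidden until the next inspection,
    held every `tau` time units, when the unit is renewed instantaneously."""
    rng = np.random.default_rng(seed)
    avail = np.empty(n_rep)
    for r in range(n_rep):
        t, downtime = 0.0, 0.0
        while t < horizon:
            fail_time = t + rng.exponential(1.0 / rate)
            if fail_time >= horizon:
                break
            # down from the (hidden) failure until the next inspection epoch
            next_insp = min(np.ceil(fail_time / tau) * tau, horizon)
            downtime += next_insp - fail_time
            t = next_insp                       # renewed at that inspection
        avail[r] = 1.0 - downtime / horizon
    return avail.mean()

for tau in (10.0, 25.0, 50.0, 100.0):
    print(f"inspection interval {tau:5.1f}  ->  availability {interval_availability(tau):.4f}")
```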

6.
The genetic algorithm is examined as a method for solving optimization problems in econometric estimation. It does not restrict either the form or regularity of the objective function, allows a reasonably large parameter space, and does not rely on a point-to-point search. The performance is evaluated through two sets of experiments on standard test problems as well as econometric problems from the literature. First, alternative genetic algorithms that vary over mutation and crossover rates, population sizes, and other features are contrasted. Second, the genetic algorithm is compared to Nelder–Mead simplex, simulated annealing, adaptive random search, and MSCORE.
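A sketch of this kind of head-to-head comparison using off-the-shelf SciPy routines as stand-ins rather than the algorithms implemented in the paper: differential evolution for the population-based evolutionary method, dual annealing for simulated annealing, and the Nelder–Mead simplex, all applied to the Rosenbrock test function. The test function, dimension and starting point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution, dual_annealing, rosen

bounds = [(-5.0, 5.0)] * 4                      # 4-dimensional Rosenbrock
x0 = np.full(4, 3.0)                            # deliberately poor start for Nelder-Mead

res_nm = minimize(rosen, x0, method="Nelder-Mead",
                  options={"maxfev": 20000, "xatol": 1e-8, "fatol": 1e-8})
res_de = differential_evolution(rosen, bounds, seed=0)   # population-based, evolutionary
res_da = dual_annealing(rosen, bounds, seed=0)           # simulated-annealing style

for name, res in [("Nelder-Mead", res_nm),
                  ("Differential evolution", res_de),
                  ("Dual annealing", res_da)]:
    print(f"{name:24s} f* = {res.fun:.3e}  evaluations = {res.nfev}")
```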

7.
This paper develops a study of different modern optimization techniques for solving the p-median problem. We analyze the behavior of a class of evolutionary algorithms (EAs) known as cellular EAs (cEAs) and compare it against a tailored neural network model and against a canonical genetic algorithm for optimization of the p-median problem. We also compare against existing approaches, including variable neighborhood search and parallel scatter search, and show their relative performance on a large set of problem instances. Our conclusions state the advantages of using a cEA: wide applicability, low implementation effort and high accuracy. In addition, the neural network model emerges as the more accurate tool, at the price of narrower applicability and a larger customization effort.
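For readers unfamiliar with the problem, here is a minimal sketch of the p-median objective and a simple random-interchange improvement step, the kind of baseline against which population-based methods such as a cEA are compared; the random instance and the heuristic are illustrative, not the algorithms studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
points = rng.random((n, 2))
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # n x n distances

def pmedian_cost(medians):
    """Sum over demand points of the distance to the closest open median."""
    return dist[:, list(medians)].min(axis=1).sum()

def swap_heuristic(n_iter=5000, seed=1):
    """Random interchange: swap one open median for a closed site, keep improvements."""
    r = np.random.default_rng(seed)
    current = set(r.choice(n, size=p, replace=False).tolist())
    best = pmedian_cost(current)
    for _ in range(n_iter):
        out = r.choice(list(current))
        inn = int(r.integers(n))
        if inn in current:
            continue
        candidate = (current - {out}) | {inn}
        cost = pmedian_cost(candidate)
        if cost < best:
            current, best = candidate, cost
    return current, best

medians, cost = swap_heuristic()
print("open medians:", sorted(medians), "cost:", round(cost, 3))
```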

8.
In this article, we deal with an optimal reliability and maintainability design problem of a searching system with complex structures. The system availability and life cycle cost are used as optimization criteria and estimated by simulation. We want to determine MTBF (Mean Time between Failures) and MTTR (Mean Time to Repair) for all components and ALDT (Administrative and Logistics Delay Times) of the searching system in order to minimize the life cycle cost and to satisfy the target system availability. A hybrid genetic algorithm with a heuristic method is proposed to find near-optimal solutions and compared with a general genetic algorithm.

9.
Genetic algorithms for numerical optimization
Genetic algorithms (GAs) are stochastic adaptive algorithms whose search method is based on simulation of natural genetic inheritance and Darwinian striving for survival. They can be used to find approximate solutions to numerical optimization problems in cases where finding the exact optimum is prohibitively expensive, or where no algorithm is known. However, such applications can encounter problems that sometimes delay, if not prevent, finding the optimal solutions with desired precision. In this paper we describe applications of GAs to numerical optimization, present three novel ways to handle such problems, and give some experimental results.
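A compact real-coded GA sketch for a numerical test function, with tournament selection, arithmetic (blend) crossover, Gaussian mutation and elitism; the operators, rates and the Rastrigin test function are generic illustrations, not the specific techniques proposed in the paper.

```python
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def real_coded_ga(dim=10, pop_size=100, generations=300, p_mut=0.1,
                  lo=-5.12, hi=5.12, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([rastrigin(ind) for ind in pop])
    for _ in range(generations):
        # binary tournament selection (minimization)
        a, b = rng.integers(0, pop_size, size=(2, pop_size))
        parents = np.where((fit[a] <= fit[b])[:, None], pop[a], pop[b])
        # arithmetic (blend) crossover between each parent and a shifted mate
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation on a random subset of genes
        mask = rng.random(children.shape) < p_mut
        children = np.clip(children + mask * rng.normal(0, 0.3, children.shape), lo, hi)
        child_fit = np.array([rastrigin(ind) for ind in children])
        # elitism: the best individual so far replaces the worst child
        best, worst = int(np.argmin(fit)), int(np.argmax(child_fit))
        children[worst], child_fit[worst] = pop[best], fit[best]
        pop, fit = children, child_fit
    best = int(np.argmin(fit))
    return pop[best], fit[best]

x_best, f_best = real_coded_ga()
print("best objective value found:", round(float(f_best), 4))
```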

10.
In this paper we develop a study of several types of parallel genetic algorithms (PGAs). Our motivation is to bring some uniformity to the proposal, comparison, and knowledge exchange among the traditionally opposed kinds of serial and parallel GAs. We comparatively analyze the properties of steady-state, generational, and cellular genetic algorithms. Afterwards, this study is extended to a distributed model consisting of a ring of GA islands. The analyzed features are time complexity, selection pressure, schema processing rates, efficacy in finding an optimum, efficiency, speedup, and scalability. Besides that, we briefly discuss how the migration policy affects the search, and some of the search properties of cellular GAs are investigated. The selected benchmark is a representative subset of problems containing real-world difficulties. We conclude that parallel GAs are often numerically better and faster than equivalent sequential GAs. Our aim is to shed some light on the advantages and drawbacks of various sequential and parallel GAs, to help researchers use them in the very diverse application fields of evolutionary computation.
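A sketch of just the ring-topology migration policy discussed here: each island sends copies of its best individuals to the next island in the ring, where they overwrite that island's worst members. The island contents, fitness function and migration size are placeholder assumptions, and the evolutionary steps inside each island are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_islands, island_size, dim = 4, 20, 8
fitness = lambda ind: np.sum(ind ** 2)           # minimization stand-in (sphere function)

# Each island is an independent subpopulation of real-coded individuals.
islands = [rng.uniform(-5, 5, size=(island_size, dim)) for _ in range(n_islands)]

def migrate(islands, n_migrants=2):
    """Ring migration: island i sends copies of its best n_migrants to
    island (i+1) mod k, where they replace that island's worst members."""
    k = len(islands)
    best_sets = []
    for isl in islands:
        order = np.argsort([fitness(ind) for ind in isl])
        best_sets.append(isl[order[:n_migrants]].copy())
    for i in range(k):
        target = islands[(i + 1) % k]
        order = np.argsort([fitness(ind) for ind in target])
        target[order[-n_migrants:]] = best_sets[i]     # overwrite the worst
    return islands

islands = migrate(islands)
print("best fitness per island after migration:",
      [round(min(fitness(ind) for ind in isl), 3) for isl in islands])
```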

11.
Estimators are often defined as the solutions to data-dependent optimization problems. A common form of objective function (the function to be optimized) that arises in statistical estimation is the sum of a convex function V and a quadratic complexity penalty. A standard paradigm for creating kernel-based estimators leads to such an optimization problem. This article describes an optimization algorithm designed for unconstrained optimization problems in which the objective function is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on some penalized logistic regression and penalized L3/2 regression problems.
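A small sketch of the class of objective described here, a convex loss plus a quadratic penalty, written as ridge-penalized logistic regression and handed to SciPy's BFGS (the baseline the article compares against); the simulated data, penalty weight and the use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, lam = 300, 10, 1.0
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def objective(beta):
    """Convex V(beta) = logistic negative log-likelihood, plus a quadratic penalty."""
    z = X @ beta
    nll = np.sum(np.logaddexp(0.0, z) - y * z)     # sum of log(1 + e^z) - y*z
    return nll + 0.5 * lam * beta @ beta

def gradient(beta):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (p - y) + lam * beta

res = minimize(objective, np.zeros(d), jac=gradient, method="BFGS")
print("converged:", res.success, " penalized NLL:", round(res.fun, 3))
```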

12.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms.
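A minimal pixel-by-pixel ICM sketch for restoring a binary image corrupted by Gaussian noise under an Ising-type prior; the energy terms, noise level and smoothing weight are illustrative assumptions, and the cascade (coarse-to-fine blocking) refinement discussed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, beta = 64, 0.8, 1.5            # image size, noise sd, smoothing weight

# True binary image: a filled square on a background, observed with Gaussian noise.
truth = np.zeros((N, N)); truth[16:48, 16:48] = 1.0
obs = truth + rng.normal(0.0, sigma, size=(N, N))

def icm(obs, n_sweeps=10):
    """Iterated conditional modes: visit each pixel and assign the label (0/1)
    that minimizes the local energy = data misfit / (2*sigma^2)
    + beta * (number of disagreeing 4-neighbours)."""
    x = (obs > 0.5).astype(float)          # initial estimate by thresholding
    for _ in range(n_sweeps):
        for i in range(N):
            for j in range(N):
                nbrs = [x[i2, j2] for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= i2 < N and 0 <= j2 < N]
                energies = []
                for label in (0.0, 1.0):
                    data = (obs[i, j] - label) ** 2 / (2 * sigma ** 2)
                    prior = beta * sum(1.0 for v in nbrs if v != label)
                    energies.append(data + prior)
                x[i, j] = float(np.argmin(energies))
    return x

restored = icm(obs)
print("pixel error rate:", float(np.mean(restored != truth)))
```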

13.
This paper proposes a new approach to the genetic algorithm (GA) based on two explicit rules from Mendel's experiments and Mendelian population genetics: the segregation and the independent assortment of alleles. The new approach has been simulated for the optimization of certain test functions. The conceptual foundation of the GA is improved by this approach through a Mendelian framework. The new approach differs from the conventional one in its crossover, recombination, and mutation operators. The results obtained here agree with those of the conventional GA, and are even better in some cases, suggesting that the new approach is overall more sensitive and accurate than the conventional one. Possible ways of improving the approach by including more genetic formulae in the code are also discussed.

14.
We develop a new principal component analysis (PCA)-type dimension reduction method for binary data. Unlike standard PCA, which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced into the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as an optimization problem whose criterion function is motivated by a penalized Bernoulli likelihood. A majorization-minimization (MM) algorithm is developed to solve the optimization problem efficiently. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and by a simulation study.

15.
The economic and statistical merits of a multiple variable sampling intervals scheme are studied. The problem is formulated as a double-objective optimization problem with the adjusted average time to signal as the statistical objective and the expected cost per hour as the economic objective. Bai and Lee's economic model [An economic design of variable sampling interval X̄ control charts. Int J Prod Econ. 1998;54:57–64] is considered. We then find the Pareto-optimal designs, in which the two objectives are minimized simultaneously, by using the non-dominated sorting genetic algorithm. Through an illustrative example, the advantages of the proposed approach are shown by providing a list of viable optimal solutions and graphical representations, which indicate the flexibility and adaptability of our approach.
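A sketch of the Pareto (non-dominated) filtering step that underlies a bi-objective design search like this one, applied to synthetic (adjusted average time to signal, expected cost) pairs; the candidate designs are randomly generated stand-ins, and the full non-dominated sorting GA evolution loop is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic candidate chart designs scored on two objectives to be minimized:
# column 0 = adjusted average time to signal, column 1 = expected cost per hour.
objectives = rng.random((200, 2)) * np.array([10.0, 50.0])

def pareto_front(obj):
    """Return the indices of non-dominated points (minimization in every column)."""
    n = obj.shape[0]
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        # point i is dominated if some point is no worse in both objectives
        # and strictly better in at least one
        better_eq = np.all(obj <= obj[i], axis=1)
        strictly = np.any(obj < obj[i], axis=1)
        if np.any(better_eq & strictly):
            dominated[i] = True
    return np.where(~dominated)[0]

front = pareto_front(objectives)
for idx in front[np.argsort(objectives[front, 0])]:
    ats, cost = objectives[idx]
    print(f"design {idx:3d}: ATS = {ats:6.3f}  cost/hour = {cost:7.3f}")
```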

16.
We develop a computationally efficient method to determine the interaction structure in a multidimensional binary sample. We use an interaction model based on orthogonal functions, and give a result on independence properties in this model. Using this result we develop an efficient approximation algorithm for estimating the parameters in a given undirected model. To find the best model, we use a heuristic search algorithm in which the structure is determined incrementally. We also give an algorithm for reconstructing the causal directions, if such exist. We demonstrate that together these algorithms are capable of discovering almost all of the true structure for a problem with 121 variables, including many of the directions.

17.
Order selection is an important step in the application of finite mixture models. Classical methods such as AIC and BIC discourage complex models with a penalty directly proportional to the number of mixing components. In contrast, Chen and Khalili propose to link the penalty to two types of overfitting. In particular, they introduce a regularization penalty to merge similar subpopulations in a mixture model, seamlessly employing the shrinkage idea of regularized regression. However, the new method requires an effective and efficient algorithm. When the popular expectation-maximization (EM) algorithm is used, we need to maximize a nonsmooth and nonconcave objective function in the M-step, which is computationally challenging. In this article, we show that such an objective function can be transformed into a sum of univariate auxiliary functions. We then design an iterative thresholding descent (ITD) algorithm to efficiently solve the associated optimization problem. Unlike many existing numerical approaches, the new algorithm leads to sparse solutions and thereby avoids undesirable ad hoc steps. We establish the convergence of the ITD and further assess its empirical performance using both simulations and real data examples.
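In the simplest penalized case, univariate auxiliary problems of the kind referred to here take the form min_x ½(x − z)² + λ|x|, whose exact solution is the soft-thresholding operator. The sketch below shows that building block inside a generic iterative thresholding loop on a toy penalized least-squares problem; it is not the paper's ITD algorithm for mixture order selection, and the data, penalty and step size are assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    """Exact minimizer of 0.5*(x - z)**2 + lam*|x| (the univariate auxiliary problem)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def iterative_thresholding(grad, x0, lam, step, n_iter=200):
    """Generic iterative thresholding descent: gradient step on the smooth part,
    then coordinate-wise soft-thresholding for the nonsmooth penalty."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = soft_threshold(x - step * grad(x), step * lam)
    return x

# Toy smooth part: least squares 0.5*||A x - b||^2, penalized by lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.05 * rng.normal(size=50)
grad = lambda x: A.T @ (A @ x - b)

step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
x_hat = iterative_thresholding(grad, np.zeros(10), lam=2.0, step=step)
print("sparse estimate:", np.round(x_hat, 3))
```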

18.
余壮雄  王美今 《统计研究》2010,27(12):86-91
Based on a generalized setting of two-sided data grouping, this paper investigates parameter estimation for regression equations that contain grouped data. For linear models in which some variables are subject to data grouping, the sample likelihood function is very complicated: the ordinary first-order optimality conditions have no analytical solution, and the Newton-Raphson iteration is difficult to converge. We compute the ML estimates of the parameters with the EM algorithm, derive the corresponding parameter iteration equations, and obtain a closed-form solution for the parameters. In particular, when the two-sided grouping proportion reaches 100%, the grouped continuous variable degenerates into a dummy variable; in this case we suggest using AIC or SC to identify whether a dummy variable in the regression equation represents a structural change or a grouped variable.
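A simplified, self-contained illustration of the kind of EM iteration described here: estimating the mean and variance of a normal variable observed with two-sided grouping (values below L or above U are only recorded as lying in the tail), where the E-step uses truncated-normal moments and the M-step is in closed form. The paper's regression setting with covariates is not reproduced, and L, U and the simulated data are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_true, sigma_true, n = 2.0, 1.5, 2000
L, U = 0.5, 3.5                                   # two-sided grouping thresholds
y = rng.normal(mu_true, sigma_true, size=n)
lo, hi = y <= L, y >= U                           # grouped ("merged") observations
obs = np.clip(y, L, U)                            # only the clipped values are recorded

mu, s2 = obs.mean(), obs.var()                    # crude starting values
for _ in range(200):
    s = np.sqrt(s2)
    a, b = (L - mu) / s, (U - mu) / s
    # E-step: conditional first and second moments for the grouped observations.
    lam_lo = norm.pdf(a) / norm.cdf(a)            # lower tail:  y | y <= L
    m1_lo = mu - s * lam_lo
    v_lo = s2 * (1.0 - a * lam_lo - lam_lo ** 2)
    lam_hi = norm.pdf(b) / norm.sf(b)             # upper tail:  y | y >= U
    m1_hi = mu + s * lam_hi
    v_hi = s2 * (1.0 + b * lam_hi - lam_hi ** 2)
    e1 = np.where(lo, m1_lo, np.where(hi, m1_hi, obs))
    e2 = np.where(lo, v_lo + m1_lo ** 2, np.where(hi, v_hi + m1_hi ** 2, obs ** 2))
    # M-step: closed-form updates, as in complete-data normal ML.
    mu = e1.mean()
    s2 = e2.mean() - mu ** 2
print("ML estimates via EM:  mu =", round(mu, 3), " sigma =", round(float(np.sqrt(s2)), 3))
```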

19.
We present upper and lower bounds for information measures and use these to find the optimal design of experiments for Bayesian networks. The bounds are inspired by properties of the junction tree algorithm, which is commonly used for calculating conditional probabilities in graphical models such as Bayesian networks. We demonstrate methods for iteratively improving the upper and lower bounds until they are sufficiently tight. We illustrate the properties of the algorithm with tutorial examples, both for the case where we want to ensure optimality and for the case where the goal is an approximate solution with a guarantee. We further use the bounds to accelerate established algorithms for constructing useful designs. An example with petroleum fields in the North Sea is studied, where the design problem is related to exploration drilling campaigns. All of our examples consider binary random variables, but the theory can also be applied to other discrete or continuous distributions.

20.
Influence diagrams are powerful tools for representing and solving complex inference and decision-making problems under uncertainty. They are directed acyclic graphs whose nodes and arcs have a precise meaning. The algorithm for evaluating an influence diagram deletes nodes from the graph in a particular order, determined by the position of each node and its arcs with respect to the value node. In many cases, however, there is more than one possible node deletion sequence. They all lead to the optimal solution of the problem, but may involve different computational effort, which is a primary issue when facing real-size models. Finding the optimal deletion sequence is an NP-hard problem, and the proposals given in the literature require complex transformations of the influence diagram. In this paper, we present a genetic algorithm-based approach that merely has to be added to whichever influence diagram evaluation algorithm is used and whose encoding is straightforward. The experiments, which vary parameters such as the crossover and mutation operators, population sizes and mutation rates, are analysed statistically and show favourable results over existing heuristics.
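A deletion sequence is naturally encoded as a permutation of the nodes, so a GA for this problem needs permutation-preserving operators. The sketch below shows a standard order-crossover-style operator and swap mutation on integer permutations, with a synthetic cost standing in for the actual evaluation effort of an influence diagram; the encoding, operators and cost are illustrative assumptions, not the paper's exact codification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 12
# Synthetic stand-in cost of a deletion sequence (lower is better); in practice this
# would be the computational effort of evaluating the influence diagram in that order.
weights = rng.random(n_nodes) * 10
cost = lambda perm: float(np.sum(weights[perm] * np.arange(1, n_nodes + 1)))

def order_crossover(p1, p2):
    """Copy a slice from parent 1, then fill the remaining positions with the
    missing nodes in the order they appear in parent 2 (an OX-style operator)."""
    i, j = sorted(rng.choice(n_nodes, size=2, replace=False))
    child = -np.ones(n_nodes, dtype=int)
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child[i:j + 1]]
    child[np.where(child < 0)[0]] = fill
    return child

def swap_mutation(perm, p_mut=0.3):
    """With probability p_mut, swap two randomly chosen positions."""
    perm = perm.copy()
    if rng.random() < p_mut:
        a, b = rng.choice(n_nodes, size=2, replace=False)
        perm[a], perm[b] = perm[b], perm[a]
    return perm

# One generation of a tiny permutation GA, just to exercise the operators.
pop = [rng.permutation(n_nodes) for _ in range(30)]
pop.sort(key=cost)
children = [swap_mutation(order_crossover(pop[k], pop[k + 1])) for k in range(len(pop) - 1)]
best = min(pop + children, key=cost)
print("best deletion sequence:", best, "cost:", round(cost(best), 2))
```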
