Similar Documents
1.
This paper presents a study of different modern optimization techniques for solving the p-median problem. We analyze the behavior of a class of evolutionary algorithm (EA) known as the cellular EA (cEA) and compare it against a tailored neural network model and against a canonical genetic algorithm for the p-median problem. We also compare against existing approaches, including variable neighborhood search and parallel scatter search, and show their relative performance on a large set of problem instances. Our conclusions highlight the advantages of using a cEA: wide applicability, low implementation effort and high accuracy. In addition, the neural network model emerges as the more accurate tool, at the price of narrower applicability and a larger customization effort.
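The p-median objective above can be stated compactly: choose p facilities so that the total distance from every client to its nearest chosen facility is minimized. As a minimal illustrative sketch (not the paper's cEA), the following Python shows the objective and one pass of the classic interchange local search; the distance matrix and starting medians are hypothetical:

```python
def pmedian_cost(dist, medians):
    """Total assignment cost: each client is served by its nearest open median."""
    return sum(min(dist[i][j] for j in medians) for i in range(len(dist)))

def best_swap(dist, medians):
    """One pass of the classic interchange (swap) local search for p-median."""
    n = len(dist)
    current = set(medians)
    best, best_cost = current, pmedian_cost(dist, current)
    for out_f in current:                       # try closing each open median
        for in_f in set(range(n)) - current:    # ...and opening each closed site
            cand = (current - {out_f}) | {in_f}
            c = pmedian_cost(dist, cand)
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost
```

A full local search would repeat the pass until no improving swap remains; metaheuristics such as the cEA discussed above explore the same swap neighborhood stochastically.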

2.
An urn model is a finite collection of indistinguishable urns together with an arbitrary distribution of a finite number of balls (bills) of k colors (denominations) into the urns. A Bayes theorem expectation optimization problem associated with certain urn models is discussed.

3.
Empirical likelihood (EL) is a nonparametric method based on observations. The EL method is defined as a constrained optimization problem, whose solution is carried out using a duality approach. In this study, we propose an alternative algorithm to solve this constrained optimization problem. The new algorithm is based on a Newton-type algorithm for the Lagrange multipliers of the constrained optimization problem. We provide a simulation study and a real data example to compare the performance of the proposed algorithm with the classical algorithm. The simulation and real data results show that the performance of the proposed algorithm is comparable with that of the existing algorithm in terms of efficiency and CPU time.
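To make the duality concrete, here is a minimal sketch of a Newton iteration on the Lagrange multiplier for the simplest EL problem, testing a hypothesized scalar mean. This illustrates the type of algorithm involved, not the authors' proposed variant, and it omits the step-halving guard a production solver would use to keep every 1 + λ(x_i − μ) positive:

```python
import math

def el_lambda(x, mu, tol=1e-10, max_iter=100):
    """Newton iteration for the EL Lagrange multiplier (scalar mean case)."""
    z = [xi - mu for xi in x]
    lam = 0.0
    for _ in range(max_iter):
        g = sum(zi / (1.0 + lam * zi) for zi in z)             # dual score
        h = -sum(zi ** 2 / (1.0 + lam * zi) ** 2 for zi in z)  # its derivative
        step = g / h
        lam -= step
        if abs(step) < tol:
            break
    return lam

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    lam = el_lambda(x, mu)
    return 2.0 * sum(math.log(1.0 + lam * (xi - mu)) for xi in x)
```

When mu equals the sample mean, the multiplier is zero and the statistic vanishes; otherwise the statistic is positive and is compared against a chi-squared reference.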

4.
For the problem of optimizing the initial condition of the GM(1,1) power model, a linear combination optimization method based on the new and old information of the original sequence is proposed. Under the objective of minimizing the sum of squared simulation errors, an optimization model for the combination weights of the initial condition is constructed, and an analytical expression for the optimal combination weights is given. Finally, data on China's senior high school enrollment rate are used as an example to verify the effectiveness and superiority of the optimization model. The results show that the initial-condition optimization method can effectively balance the weights of new and old information and improve the simulation and prediction accuracy of the GM(1,1) power model.

5.
In this article, a new two-phase algorithm for computationally expensive simulation problems is presented. In the first phase, as in a model-based algorithm, the simulation output is used directly in the optimization stage. In the second phase, the simulation model is replaced by a valid metamodel. In addition, a new optimization algorithm is presented. To evaluate the performance of the proposed algorithm, it is applied to the (s,S) inventory problem as well as to five test functions. Numerical results show that the proposed algorithm leads to better solutions with less computational time than the corresponding metamodel-based algorithm.
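The metamodel phase can be sketched in a few lines: fit a cheap response surface to a handful of expensive simulation runs, then optimize the surface instead of the simulator. A hypothetical one-dimensional illustration, in which the quadratic stand-in `simulate` replaces a real (costly, possibly stochastic) simulator:

```python
import numpy as np

def fit_quadratic_metamodel(xs, ys):
    """Replace the simulator by a quadratic response surface y ~ a*x^2 + b*x + c."""
    a, b, c = np.polyfit(xs, ys, deg=2)
    return a, b, c

def metamodel_minimizer(a, b, c):
    """Closed-form minimizer of the fitted quadratic (assumes a > 0)."""
    return -b / (2.0 * a)

# Stand-in for an expensive simulation with optimum at x = 2.
simulate = lambda x: (x - 2.0) ** 2 + 1.0

xs = np.linspace(0.0, 4.0, 9)              # design points
ys = np.array([simulate(x) for x in xs])   # "expensive" evaluations
a, b, c = fit_quadratic_metamodel(xs, ys)
x_star = metamodel_minimizer(a, b, c)      # optimize the surrogate, not the simulator
```

In practice the metamodel must be validated against fresh simulation runs before its optimum is trusted, which is the role of the "valid metamodel" requirement above.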

6.
As a robust method against model deviation, we consider a pre-test estimation function. To optimize a continuous design for this problem, we give an asymptotic risk matrix for quadratic loss. The risk is then given by an isotonic criterion function of the asymptotic risk matrix. As an optimization criterion, we look for a design that minimizes the maximal risk in the deviation model under the restriction that the risk in the original model does not exceed a given bound. This optimization problem is solved for polynomial regression, with the deviation consisting of one additional regression function and the criterion function being the determinant.

7.
In this paper, an optimization model is developed for the economic design of a rectifying inspection sampling plan in the presence of two markets. A product with a normally distributed quality characteristic with unknown mean and variance is produced in the process. The quality characteristic has a lower specification limit. The aim of this paper is to maximize the profit, which incorporates the Taguchi loss function, under the constraints of simultaneously satisfying the producer's and consumer's risks in two different markets. The giveaway cost per unit of sold excess material is considered in the proposed model. A case study is presented to illustrate the application of the proposed methodology. In addition, sensitivity analysis is performed to study the effect of the model parameters on the expected profit and the optimal solution. The optimal process adjustment problem and the acceptance sampling plan are combined in the economic optimization model. Also, the process mean and standard deviation are assumed to be unknown, and their impact is analyzed. Finally, inspection error is considered, and its impact is investigated and analyzed.

8.
We give a new characterization of Elfving's (1952) method for computing c-optimal designs in k dimensions which gives explicit formulae for the k unknown optimal weights and k unknown signs in Elfving's characterization. This eliminates the need to search over these parameters to compute c-optimal designs, and thus reduces the computational burden from solving a family of optimization problems to solving a single optimization problem for the optimal finite support set. We give two illustrative examples: a high dimensional polynomial regression model and a logistic regression model, the latter showing that the method can be used for locally optimal designs in nonlinear models as well.

9.
This paper proposes an adaptive model selection criterion with a data-driven penalty term. We treat model selection as an equality-constrained minimization problem and develop an adaptive model selection procedure based on the Lagrange optimization method. In contrast to Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and most other existing criteria, this new criterion minimizes the model size and takes a measure of lack-of-fit as an adaptive penalty. Both theoretical results and simulations illustrate the power of this criterion with respect to consistency and pointwise asymptotic loss efficiency in the parametric and nonparametric cases.

10.
In this paper, we present an adaptive evolutionary Monte Carlo algorithm (AEMC), which combines a tree-based predictive model with an evolutionary Monte Carlo sampling procedure for the purpose of global optimization. Our development is motivated by sensor placement applications in engineering, which require optimizing a complicated “black-box” objective function. The proposed method enhances optimization efficiency and effectiveness compared with several alternative strategies. AEMC falls into the category of adaptive Markov chain Monte Carlo (MCMC) algorithms and is the first adaptive MCMC algorithm that simulates multiple Markov chains in parallel. A theorem about the ergodicity of the AEMC algorithm is stated and proven. We demonstrate the advantages of the proposed method by applying it to a sensor placement problem in a manufacturing process, as well as to a standard Griewank test function.

11.
Multivariable optimization in a large-data environment is concerned with how to reliably obtain a set of optimization results from a mass of data that influences the objective function in a complex way. This is an important issue in statistical computation, because the complexity of the relationships between variable parameters leads to repeated statistical analyses and a significant amount of wasted data. A statistical multivariable optimization method using an improved orthogonal algorithm for large data is proposed. Considering the optimization problem with multiple parameters in a large-data environment, a multi-parameter optimization model for the improved orthogonal algorithm is established. Furthermore, an extensive simulation study on the temperature field distribution of an anti-/de-icing component was conducted to verify the validity of the proposed optimization method. The optimized temperature field distribution meets the anti-/de-icing requirements in numerical simulation. Simulation results show that the optimization effect of the proposed method is markedly more evident and accurate than the non-optimized temperature distribution, verifying the effectiveness of the method.

12.
After reading a few articles in the nonlinear econometric literature, one begins to notice that each discussion follows roughly the same lines as the classical treatment of maximum likelihood estimation. There are some technical problems, having to do with simultaneously conditioning on the exogenous variables and subjecting the true parameter to a Pitman drift, which prevent the use of the classical methods of proof, but the basic impression of similarity is correct. An estimator – be it nonlinear least squares, three-stage nonlinear least squares, or whatever – is the solution of an optimization problem. And the objective function of the optimization problem can be treated as if it were the likelihood to derive the Wald test statistic, the likelihood ratio test statistic, and Rao's efficient score statistic. Their asymptotic null and non-null distributions can be found using arguments fairly similar to the classical maximum likelihood arguments. In this article we exploit these observations and unify much of the nonlinear econometric literature. What escapes this unification is that which has an objective function that is not twice continuously differentiable with respect to the parameters – minimum absolute deviations regression, for example.

The model which generates the data need not be the same as the model which was presumed to define the optimization problem. Thus, these results can be used to obtain the asymptotic behavior of inference procedures under specification error. We think that this will prove to be the most useful feature of the paper. For example, it is not necessary to resort to Monte Carlo simulation to determine whether a Translog estimate of an elasticity of substitution obtained by nonlinear three-stage least squares is robust against a CES true state of nature. The asymptotic approximations we give here will provide an analytic answer to the question, sufficiently accurate for most purposes.

13.
The structured total least squares estimator, defined via a constrained optimization problem, is a generalization of the total least squares estimator when the data matrix and the applied correction satisfy given structural constraints. In the paper, an affine structure with additional assumptions is considered. In particular, Toeplitz and Hankel structured, noise free and unstructured blocks are allowed simultaneously in the augmented data matrix. An equivalent optimization problem is derived that has as decision variables only the estimated parameters. The cost function of the equivalent problem is used to prove consistency of the structured total least squares estimator. The results for the general affine structured multivariate model are illustrated by examples of special models. Modification of the results for block-Hankel/Toeplitz structures is also given. As a by-product of the analysis of the cost function, an iterative algorithm for the computation of the structured total least squares estimator is proposed.

14.
An investment and consumption problem is formulated and its optimal strategy is investigated. We assume the basic binary model, but with unknown parameters. We apply the parametric Bayesian approach to formulate the problem as a sequential stochastic optimization model and use the technique of dynamic programming to characterize the optimal strategy. It is discovered that, despite the unknown parameters, when the power and logarithmic utility functions are treated, the optimal value function is of the same form as the utility function. The random finite-horizon model is formulated as an infinite-horizon model. Our results are similar to those in the literature with different return functions exhibiting constant relative risk aversion.

15.
This paper details a method for estimating the unknown parameters of a regression model when the estimates of the dependent variable must be embedded in an input–output table with accounting constraints. Since in regression modelling the dependent variable is usually transformed, either to achieve homoscedasticity of the residuals or for a better interpretation of the model, the estimation procedure becomes an optimization problem for a suitably defined Lagrangian function with non-linear constraints. After detailing the algorithm and deriving the asymptotic distribution of the restricted estimator, the methodology is applied to estimate the flows of tourism within and between Italian regions with a gravity model. The procedure can be seen as an extension of Byron's (J R Stat Soc Ser A 141:359–367, 1978) balancing method.
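For the special case of linear accounting constraints and ordinary least squares, the restricted estimator has a closed form obtained from the Lagrangian first-order conditions. A minimal sketch of that special case (the paper's setting involves non-linear constraints, which require iterative methods instead):

```python
import numpy as np

def restricted_ls(X, y, R, r):
    """Equality-restricted least squares: minimize ||y - X b||^2 s.t. R b = r."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                 # unrestricted OLS estimate
    A = R @ XtX_inv @ R.T
    lam = np.linalg.solve(A, R @ beta - r)   # Lagrange multipliers
    return beta - XtX_inv @ R.T @ lam        # project OLS onto the constraint set
```

The returned estimate satisfies the constraints exactly, which is the role the accounting identities play in the input–output application above.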

16.
This article is concerned with the problem of controlling a simple immigration-birth-death process, which represents a pest population, by the introduction of a predator in the habitat of the pests. The optimization criterion is the minimization of the expected long-run average cost per unit time. It is possible to construct an appropriate semi-Markov decision model with a finite set of states if and only if the difference between the per capita birth rate and the per capita death rate of the pests is smaller than half of the rate at which the predator is introduced in the habitat.

17.
A fuzzy multiple-allocation p-hub median problem is proposed, in which transportation costs are defined as fuzzy variables and the objective is to minimize the total transportation cost at a given credibility level. For trapezoidal and normal transportation costs, the problem is equivalent to a deterministic mixed-integer linear programming problem. In the empirical analysis, panel data on the coal industry of Liaoning Province are selected to determine the number and locations of coal transportation hubs at different credibility levels, solved using traditional optimization methods such as branch and bound. The computations show that this model and solution method can be used to solve the hub location problem for coal transportation in Liaoning Province.

18.
The economic and statistical merits of a multiple variable sampling intervals scheme are studied. The problem is formulated as a double-objective optimization problem, with the adjusted average time to signal as the statistical objective and the expected cost per hour as the economic objective. Bai and Lee's economic model [An economic design of variable sampling interval X̄ control charts. Int J Prod Econ. 1998;54:57–64] is considered. We then find the Pareto-optimal designs, in which the two objectives are minimized simultaneously, by using the non-dominated sorting genetic algorithm. Through an illustrative example, the advantages of the proposed approach are shown by providing a list of viable optimal solutions and graphical representations, which indicate the flexibility and adaptability of our approach.
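Non-dominated sorting rests on a simple dominance test: a design is Pareto-optimal if no other design is at least as good in both objectives and strictly better in one. A small sketch extracting the first front when both objectives are minimized (the points are hypothetical (time-to-signal, cost) pairs):

```python
def pareto_front(points):
    """Indices of non-dominated points when both objectives are minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

A genetic algorithm such as NSGA-II repeats this sorting on successive populations, ranking individuals by the front they fall into; the first front of the final population is the list of viable optimal designs reported above.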

19.
The analysis of survival endpoints subject to right-censoring is an important research area in statistics, particularly among econometricians and biostatisticians. The two most popular semiparametric models are the proportional hazards model and the accelerated failure time (AFT) model. Rank-based estimation in the AFT model is computationally challenging due to optimization of a non-smooth loss function. Previous work has shown that rank-based estimators may be written as solutions to linear programming (LP) problems. However, the size of the LP problem is O(n²+p) subject to n² linear constraints, where n denotes the sample size and p denotes the dimension of the parameters. As n and/or p increases, the feasibility of such a solution in practice becomes questionable. Among data mining and statistical learning enthusiasts, there is interest in extending ordinary regression coefficient estimators for low dimensions into high-dimensional data mining tools through regularization. Applying this recipe to rank-based coefficient estimators leads to formidable optimization problems, which may be avoided through smooth approximations to non-smooth functions. We review smooth approximations and quasi-Newton methods for rank-based estimation in AFT models. The computational cost of our method is substantially smaller than that of the corresponding LP problem, and the method can be applied to small- or large-scale problems alike. The algorithm described here allows one to couple rank-based estimation for censored data with virtually any regularization and is exemplified through four case studies.
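As an illustration of the smoothing idea, the Gehan-type rank loss for the AFT model contains the non-smooth term max(0, ·), which can be replaced by a softplus with bandwidth h and then handed to a quasi-Newton routine. A hypothetical sketch, not the authors' exact implementation (`scipy` is assumed available, and the uncensored toy data are made up):

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_gehan_loss(beta, X, logT, delta, h=0.05):
    """Gehan rank loss for the AFT model with max(0, u) smoothed by a softplus."""
    e = logT - X @ beta                           # log-scale residuals
    diff = e[None, :] - e[:, None]                # diff[i, j] = e_j - e_i
    smooth_pos = h * np.logaddexp(0.0, diff / h)  # smooth max(0, e_j - e_i)
    return np.sum(delta[:, None] * smooth_pos) / len(e) ** 2

# hypothetical uncensored data: log T = 2 x + small noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
logT = 2.0 * x + rng.normal(0.0, 0.01, 30)
delta = np.ones(30)                               # censoring indicators, all uncensored
res = minimize(smoothed_gehan_loss, x0=np.zeros(1),
               args=(x[:, None], logT, delta), method="BFGS")
```

Because the smoothed loss is differentiable, BFGS applies directly; the intercept drops out of the pairwise differences, so only slope coefficients are identified, as is standard for rank-based AFT estimation.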

20.
Estimators are often defined as the solutions to data-dependent optimization problems. A common form of objective function (the function to be optimized) that arises in statistical estimation is the sum of a convex function V and a quadratic complexity penalty. A standard paradigm for creating kernel-based estimators leads to such an optimization problem. This article describes an optimization algorithm designed for unconstrained optimization problems in which the objective function is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on some penalized logistic regression and penalized L3/2 regression problems.
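The objective class described — a nonnegative convex V plus a known quadratic penalty — can be written down directly for penalized logistic regression. A minimal sketch that minimizes it with BFGS, the baseline the article compares against (the toy data and penalty weight are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def penalized_logistic_objective(beta, X, y, lam):
    """Convex V (logistic loss) plus a known quadratic complexity penalty."""
    margins = y * (X @ beta)                     # labels y are in {-1, +1}
    V = np.sum(np.logaddexp(0.0, -margins))      # nonnegative convex logistic loss
    return V + 0.5 * lam * beta @ beta           # quadratic penalty term

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
res = minimize(penalized_logistic_objective, x0=np.zeros(2),
               args=(X, y, 1.0), method="BFGS")
beta_hat = res.x
```

The quadratic penalty makes the objective strictly convex, so the minimizer is unique; a specialized algorithm of the kind the article proposes exploits the known quadratic part rather than treating the whole objective as a black box, as BFGS does here.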
