Similar Literature
20 similar articles found.
1.
The properties of a parameterized form of generalized simulated annealing for function minimization are investigated by studying repeated minimizations from random starting points. This leads to the comparison of distributions of function values and of numbers of function evaluations. Parameter values which yield searches repeatedly terminating close to the global minimum may require unacceptably many function evaluations. If computational resources are a constraint, the total number of function evaluations may be limited. A sensible strategy is then to restart at a random point any search which terminates, until the total allowable number of function evaluations has been exhausted. The response is now the minimum of the function values obtained. This strategy yields a surprisingly stable solution for the parameter values of the simulated annealing algorithm. The algorithm can be further improved by segmentation, in which each search is limited to a maximum number of evaluations, perhaps no more than a fifth of the total available. The main tool for interpreting the distributions of function values is the boxplot. The application is to the optimum design of experiments.
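A minimal sketch of the restart-and-segmentation strategy described above, assuming a generic annealing routine and a simple quadratic test function (all parameter values, names and the test function are illustrative, not taken from the paper):

```python
import math
import random

def annealing_search(f, x0, max_evals, step=0.5, t0=1.0, cooling=0.995):
    """One simulated-annealing run limited to `max_evals` function evaluations."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    evals, t = 1, t0
    while evals < max_evals:
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        evals += 1
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f, evals

def segmented_restarts(f, dim, total_evals, segment_frac=0.2):
    """Restart from random points until the total evaluation budget is spent;
    each segment uses at most a fixed fraction of the budget."""
    budget, best_x, best_f = total_evals, None, float("inf")
    segment = max(1, int(segment_frac * total_evals))
    while budget > 0:
        x0 = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        x, fx, used = annealing_search(f, x0, min(segment, budget))
        budget -= used
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f          # the response is the minimum value found

if __name__ == "__main__":
    sphere = lambda v: sum(vi * vi for vi in v)   # simple test function
    print(segmented_restarts(sphere, dim=3, total_evals=5000))
```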

2.
A simulated annealing algorithm is presented that finds the minimum-cost redundancy allocation subject to meeting a minimal reliability requirement for a coherent system of components. It is assumed that components are independent of one another, and that the form of the nominal system reliability function is available for input to the algorithm.
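A minimal sketch of this kind of search for a simple series system with active redundancy; the component reliabilities, costs, cooling schedule and the feasibility-preserving move below are illustrative assumptions, not the article's formulation:

```python
import math
import random

R_COMP = [0.80, 0.85, 0.90]   # assumed component reliabilities
COST = [4.0, 6.0, 5.0]        # assumed per-unit component costs
R_MIN = 0.99                  # required system reliability

def system_reliability(n):
    # Series system of independent subsystems, each with n[i] parallel copies.
    r = 1.0
    for ri, ni in zip(R_COMP, n):
        r *= 1.0 - (1.0 - ri) ** ni
    return r

def total_cost(n):
    return sum(c * ni for c, ni in zip(COST, n))

def anneal_allocation(iters=20000, t0=5.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    n = [5] * len(R_COMP)         # start from a comfortably feasible allocation
    best, t = list(n), t0
    for _ in range(iters):
        cand = list(n)
        i = rng.randrange(len(cand))
        cand[i] = max(1, cand[i] + rng.choice([-1, 1]))   # perturb one redundancy level
        if system_reliability(cand) >= R_MIN:             # keep only feasible candidates
            delta = total_cost(cand) - total_cost(n)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                n = cand
                if total_cost(n) < total_cost(best):
                    best = list(n)
        t *= cooling
    return best, total_cost(best), system_reliability(best)

print(anneal_allocation())
```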

3.
The Expectation–Maximization (EM) algorithm is a very popular technique for maximum likelihood estimation in incomplete data models. When the expectation step cannot be performed in closed form, a stochastic approximation of EM (SAEM) can be used. Under very general conditions, the authors have shown that the attractive stationary points of the SAEM algorithm correspond to the global and local maxima of the observed likelihood. In order to avoid convergence towards a local maximum, a simulated annealing version of SAEM is proposed. An illustrative application to the convolution model, estimating the coefficients of the filter, is given.
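To fix ideas, the standard SAEM recursion replaces the intractable E-step by a stochastic approximation (generic textbook form, given here as a sketch rather than the authors' exact formulation):

\[
Q_k(\theta) = Q_{k-1}(\theta) + \gamma_k \bigl( \log f(y, z^{(k)}; \theta) - Q_{k-1}(\theta) \bigr),
\qquad z^{(k)} \sim p(z \mid y; \hat{\theta}_{k-1}),
\]

followed by the M-step \(\hat{\theta}_k = \arg\max_\theta Q_k(\theta)\), where \(\{\gamma_k\}\) is a decreasing step-size sequence; the simulated annealing version proposed in the paper modifies this scheme so as to avoid convergence towards a local maximum.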

4.
This paper investigates the applicability of a Monte Carlo technique known as simulated annealing to achieve optimum or sub-optimum decompositions of probabilistic networks under bounded resources. High-quality decompositions are essential for performing efficient inference in probabilistic networks. Optimum decomposition of probabilistic networks is known to be NP-hard (Wen, 1990). The paper proves that cost-function changes can be computed locally, which is essential to the efficiency of the annealing algorithm. Pragmatic control schedules which reduce the running time of the annealing algorithm are presented and evaluated. Apart from the conventional temperature parameter, these schedules involve the radius of the search space as a new control parameter. The evaluation suggests that the inclusion of this new parameter is important for the success of the annealing algorithm for the present problem.

5.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms.
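A minimal pixel-by-pixel ICM sweep for a binary image with an Ising-type smoothing prior; the noise model, the energy and all parameter values are illustrative assumptions rather than the paper's model, and the cascade (coarse-to-fine) extension is not shown:

```python
import numpy as np

def icm_sweep(x, y, beta=1.5, noise=0.1, n_sweeps=5):
    """Iterated conditional modes for a binary image.

    x : current labels in {0, 1}; y : observed noisy labels;
    beta : strength of the Ising smoothing prior;
    noise : assumed probability that the channel flipped a pixel.
    """
    log_lik = np.log([[1 - noise, noise], [noise, 1 - noise]])   # log P(obs | true)
    h, w = x.shape
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = [x[i2, j2] for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= i2 < h and 0 <= j2 < w]
                # Conditional log-posterior of each candidate label given the neighbours.
                scores = [log_lik[y[i, j], lab] + beta * sum(int(nb == lab) for nb in nbrs)
                          for lab in (0, 1)]
                x[i, j] = int(np.argmax(scores))                 # greedy conditional mode
    return x

# Toy usage: recover a half-black / half-white image from 15% salt-and-pepper noise.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), dtype=int)
truth[:, 16:] = 1
obs = np.where(rng.random(truth.shape) < 0.15, 1 - truth, truth)
print("pixel error rate:", (icm_sweep(obs.copy(), obs) != truth).mean())
```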

6.
The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise by using likelihood ratio tests for example, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criteria value(s). However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension to the traditional simulated annealing algorithm that allows for moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore be used to locate models and parameters that minimize criteria such as the Akaike information criterion, but within a single algorithm, removing the need for large numbers of simulations to be run. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
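A hedged sketch of a transdimensional annealing search of the kind described, applied to variable selection in Gaussian linear regression with AIC as the criterion; the move types, proposal scales and cooling schedule are illustrative choices, not the authors':

```python
import math
import numpy as np

def aic(y, X, mask, beta, sigma):
    """AIC of a Gaussian linear model at the given (not profiled) coefficients."""
    mu = X[:, mask] @ beta[mask] if mask.any() else np.zeros(len(y))
    loglik = (-0.5 * len(y) * math.log(2 * math.pi * sigma ** 2)
              - 0.5 * np.sum((y - mu) ** 2) / sigma ** 2)
    return -2.0 * loglik + 2.0 * (int(mask.sum()) + 1)   # coefficients plus sigma

def transdimensional_sa(y, X, iters=20000, t0=5.0, cooling=0.9997, seed=1):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    mask, beta, sigma = np.zeros(p, dtype=bool), np.zeros(p), float(np.std(y))
    cur = aic(y, X, mask, beta, sigma)
    best, t = (cur, mask.copy(), beta.copy(), sigma), t0
    for _ in range(iters):
        m2, b2, s2 = mask.copy(), beta.copy(), sigma
        if rng.random() < 0.5:                  # between-model move: toggle one variable
            j = rng.integers(p)
            m2[j] = not m2[j]
            b2[j] = rng.normal(0.0, 1.0) if m2[j] else 0.0
        elif m2.any() and rng.random() < 0.8:   # within-model move: perturb a coefficient
            j = rng.choice(np.flatnonzero(m2))
            b2[j] += rng.normal(0.0, 0.1)
        else:                                   # within-model move: perturb sigma
            s2 = abs(s2 + rng.normal(0.0, 0.05)) + 1e-6
        new = aic(y, X, m2, b2, s2)
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            mask, beta, sigma, cur = m2, b2, s2, new
            if cur < best[0]:
                best = (cur, mask.copy(), beta.copy(), sigma)
        t *= cooling
    return best

# Toy usage: y depends on the first two of five candidate predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
score, mask, beta, sigma = transdimensional_sa(y, X)
print("selected variables:", np.flatnonzero(mask), "AIC:", round(score, 1))
```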

7.
Genetic algorithms (GAs) are adaptive search techniques designed to find near-optimal solutions of large scale optimization problems with multiple local maxima. Standard versions of the GA are defined for objective functions which depend on a vector of binary variables. The problem of finding the maximum a posteriori (MAP) estimate of a binary image in Bayesian image analysis appears to be well suited to a GA as images have a natural binary representation and the posterior image probability is a multi-modal objective function. We use the numerical optimization problem posed in MAP image estimation as a test-bed on which to compare GAs with simulated annealing (SA), another all-purpose global optimization method. Our conclusions are that the GAs we have applied perform poorly, even after adaptation to this problem. This is somewhat unexpected, given the widespread claims of GAs' effectiveness, but it is in keeping with work by Jennison and Sheehan (1995) which suggests that GAs are not adept at handling problems involving a great many variables of roughly equal influence. We reach more positive conclusions concerning the use of the GA's crossover operation in recombining near-optimal solutions obtained by other methods. We propose a hybrid algorithm in which crossover is used to combine subsections of image reconstructions obtained using SA and we show that this algorithm is more effective and efficient than SA or a GA individually.
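A hedged sketch of the kind of crossover step described, grafting a rectangular subsection of one reconstruction onto another and keeping the lowest-energy child; the block proposal and the toy energy are illustrative assumptions, not the authors' operator:

```python
import numpy as np

def block_crossover(parent_a, parent_b, energy, n_children=50, rng=None):
    """Combine two binary reconstructions by swapping rectangular subsections.

    `energy` is a hypothetical callable returning the posterior energy to be
    minimised (lower is better); the parents are assumed to be 0/1 arrays of
    equal shape, e.g. from independent simulated-annealing runs.
    """
    rng = rng or np.random.default_rng(0)
    h, w = parent_a.shape
    best, best_e = parent_a.copy(), energy(parent_a)
    if energy(parent_b) < best_e:
        best, best_e = parent_b.copy(), energy(parent_b)
    for _ in range(n_children):
        i0, i1 = sorted(rng.integers(0, h, size=2))
        j0, j1 = sorted(rng.integers(0, w, size=2))
        child = parent_a.copy()
        child[i0:i1, j0:j1] = parent_b[i0:i1, j0:j1]   # graft a block of B onto A
        e = energy(child)
        if e < best_e:
            best, best_e = child, e
    return best, best_e

# Toy demo: energy = disagreement with a noisy observation plus a smoothness term.
obs = (np.random.default_rng(1).random((16, 16)) < 0.5).astype(int)
def demo_energy(img):
    smooth = np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()
    return 2.0 * (img != obs).sum() + smooth
print(block_crossover(obs.copy(), 1 - obs, demo_energy)[1])
```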

8.
9.
The construction of optimal designs for change-over experiments requires consideration of the two component treatment designs: one for the direct treatments and the other for the residual (carry-over) treatments. A multi-objective approach is introduced using simulated annealing, which simultaneously optimises each of the component treatment designs to produce a set of dominant designs in one run of the algorithm. The algorithm is used to demonstrate that a wide variety of change-over designs can be generated quickly on a desk top computer. These are generally better than those previously recorded in the literature.

10.
Backsolving is a class of methods that generate simulated values for exogenous forcing processes in a stochastic equilibrium model from specified assumed distributions for Euler-equation disturbances. It can be thought of as a way to force the approximation error generated by inexact choice of decision rule or boundary condition into distortions of the distribution of the exogenous shocks in the simulations rather than into violations of the Euler equations as with standard approaches. Here it is applied to a one-sector neoclassical growth model with decision rule generated from a linear-quadratic approximation.

11.
Recent work on the assignment problem is surveyed with the aim of illustrating the contribution that stochastic thinking can make to problems of interest to computer scientists. The assignment problem is thus examined in connection with the analysis of greedy algorithms, marriage lemmas, linear programming with random costs, randomization based matching, stochastic programming, and statistical mechanics. (The survey is based on the invited presentation given during the "Statistics Days at FSU" in March 1990.)

12.
Streaming feature selection is a greedy approach to variable selection that evaluates potential explanatory variables sequentially. It selects significant features as soon as they are discovered rather than testing them all and picking the best one. Because it is so greedy, streaming selection can rapidly explore large collections of features. If significance is defined by an alpha investing protocol, then the rate of false discoveries will be controlled. The focus of attention in variable selection, however, should be on fit rather than hypothesis testing. Little is known about the risk of estimators produced by streaming selection and how the configuration of these estimators influences the risk. To meet these needs, we provide a computational framework based on stochastic dynamic programming that allows fast calculation of the minimax risk of a sequential estimator relative to an alternative. The alternative can be data driven or derived from an oracle. This framework allows us to compute and contrast the risk inflation of sequential estimators derived from various alpha investing rules. We find that a universal investing rule performs well over a variety of models and that estimators allowed to have larger than conventional rates of false discoveries produce generally smaller risk.
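A minimal sketch of an alpha-investing ledger for a stream of p-values; the spending rule (a fixed fraction of the current wealth per test) and the numeric defaults are illustrative, not the specific investing rules studied in the article:

```python
def alpha_investing(p_values, w0=0.25, payout=0.25, spend_frac=0.1):
    """Sequential testing with an alpha-wealth ledger.

    Each test bids a fraction of the current wealth; a rejection earns the
    payout back, a non-rejection pays alpha/(1 - alpha) out of the wealth.
    """
    wealth, selected = w0, []
    for j, p in enumerate(p_values):
        if wealth <= 0:
            break                                     # alpha-wealth exhausted
        alpha_j = min(spend_frac * wealth, 0.5)       # bid part of the current wealth
        if p <= alpha_j:
            selected.append(j)
            wealth += payout
        else:
            wealth -= alpha_j / (1.0 - alpha_j)
    return selected

# Example: features arriving as a stream of p-values.
print(alpha_investing([0.001, 0.20, 0.03, 0.50, 0.0004, 0.30]))
```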

13.
We compare the performances of the simulated annealing and the EM algorithms in problems of decomposition of normal mixtures according to the likelihood approach. In this case the likelihood function has multiple maxima and singularities, and we consider a suitable reformulation of the problem which yields an optimization problem having a global solution and at least a smaller number of spurious maxima. The results are compared considering some distance measures between the estimated distributions and the true ones. No overwhelming superiority of either method has been demonstrated, though in one of our cases simulated annealing achieved better results.
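For reference, the standard EM updates for a K-component univariate normal mixture (textbook form, not the authors' reformulation of the likelihood) alternate the responsibilities

\[
\hat{\tau}_{ik} = \frac{\pi_k\, \phi(x_i; \mu_k, \sigma_k^2)}{\sum_{j=1}^{K} \pi_j\, \phi(x_i; \mu_j, \sigma_j^2)}
\]

with the weighted updates

\[
\pi_k = \frac{1}{n}\sum_{i=1}^{n}\hat{\tau}_{ik}, \qquad
\mu_k = \frac{\sum_i \hat{\tau}_{ik}\, x_i}{\sum_i \hat{\tau}_{ik}}, \qquad
\sigma_k^2 = \frac{\sum_i \hat{\tau}_{ik}\,(x_i - \mu_k)^2}{\sum_i \hat{\tau}_{ik}};
\]

the singularities mentioned above arise when a component mean coincides with a data point and the corresponding \(\sigma_k^2\) shrinks to zero, driving the likelihood to infinity.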

14.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous work on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this value is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
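In generic notation (a sketch of one standard form of the MCML approximation; the symbols and the exact sampling scheme are assumptions, not taken from the paper), with observed data \(y\), latent data \(z\) and complete-data density \(f(y, z; \theta)\), the likelihood ratio at an importance value \(\psi\) is

\[
\frac{L(\theta)}{L(\psi)} = \mathrm{E}_{\psi}\!\left[ \frac{f(y, Z; \theta)}{f(y, Z; \psi)} \,\middle|\, y \right]
\approx \frac{1}{m}\sum_{i=1}^{m} \frac{f(y, z_i; \theta)}{f(y, z_i; \psi)},
\qquad z_i \sim p(\,\cdot \mid y; \psi),
\]

and the MCML estimator maximizes the Monte Carlo average on the right over \(\theta\).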

15.
This paper presents a new test statistic for dynamic or stochastic mis-specification in the dynamic demand or dynamic adjustment class of economic models. The test statistic is based on residual autocorrelations, is asymptotically χ2 distributed, and is suspected to be of low power. The test is illustrated with an example from the recent econometric literature.

16.
This article applies the methods of stochastic dynamic programming to a risk management problem in which an agent hedges her derivative position by submitting limit orders. The model is thus the first in the literature on optimal trading with limit orders to handle the hedging of options or other derivatives. A hedging strategy is developed in which both the size and the limit price of each order are optimally set.

17.
Reliability is a major concern in software development because unreliable software can cause hazardous failures of the computer systems on which it runs. One way to enhance software reliability is to detect and remove faults during the testing phase, which begins with module testing, wherein modules are tested independently to remove as many faults as possible with a limited resource. The available resource must therefore be allocated among the modules so that as many faults as possible are removed from each module, yielding higher software reliability. In this article, we discuss the problem of optimally allocating the testing resource for a modular software system so as to maximize the number of faults removed, subject to the conditions that the amount of testing effort is fixed, a certain percentage of faults is to be removed, and a desired level of reliability is to be achieved. The problem is formulated as a nonlinear programming problem (NLPP) modeled by inflection S-shaped software reliability growth models (SRGMs) based on a non-homogeneous Poisson process (NHPP) incorporating exponentiated Weibull (EW) testing-effort functions. A solution procedure is then developed using a dynamic programming technique to solve the NLPP, and three special cases of optimum resource allocation are also discussed. Finally, numerical examples using three sets of software failure data are presented to illustrate the procedure and to validate the performance of the proposed strategies. Experimental results indicate that the proposed strategies may help software project managers make better decisions when allocating the testing resource. The results are also compared with those of Kapur et al. (2004), Huang and Lyu (2005), and Jha et al. (2010), which deal with similar problems; the comparison shows that the proposed dynamic programming method for the testing-resource allocation problem yields a gain in efficiency over the other methods.
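A hedged sketch of a dynamic-programming allocation of discretized testing effort across modules; the simple exponential detection curve and the per-module parameters below are illustrative stand-ins for the inflection S-shaped / EW testing-effort model used in the article:

```python
import math

# Assumed per-module parameters: a = expected total faults, b = detection rate.
A = [90.0, 60.0, 40.0]
B = [0.05, 0.08, 0.03]

def faults_removed(i, w):
    """Expected faults removed from module i with testing effort w (illustrative curve)."""
    return A[i] * (1.0 - math.exp(-B[i] * w))

def allocate(total_effort, step=1):
    """value[w] = best total faults removable with w effort units over the modules seen so far."""
    n_units = int(total_effort // step)
    value = [0.0] * (n_units + 1)
    choice = [[0] * (n_units + 1) for _ in A]
    for i in range(len(A)):
        new_value = [0.0] * (n_units + 1)
        for w in range(n_units + 1):
            best, best_k = -1.0, 0
            for k in range(w + 1):                       # effort units given to module i
                v = value[w - k] + faults_removed(i, k * step)
                if v > best:
                    best, best_k = v, k
            new_value[w], choice[i][w] = best, best_k
        value = new_value
    alloc, w = [0] * len(A), n_units                     # back-track the optimal split
    for i in reversed(range(len(A))):
        alloc[i] = choice[i][w] * step
        w -= choice[i][w]
    return alloc, value[n_units]

print(allocate(total_effort=100))
```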

18.
Modelling age-specific fertility rates is of great importance in demography because of their influence on population growth. Although a variety of fertility models exist in the demographic literature, most of them offer no demographic interpretation for their parameters. Models with a behavioural interpretation are generally expected to be more universal than those without one. Even though the famous Gompertz model has some behavioural interpretation, it suffers from other drawbacks. In the present work, we propose a new fertility model whose genesis lies in a generalization of the logistic law. The proposed model has a good behavioural interpretation as well as readily interpretable parameters.

19.
20.
A dynamic treatment regime is a list of decision rules, one per time interval, for how the level of treatment will be tailored through time to an individual's changing status. The goal of this paper is to use experimental or observational data to estimate decision regimes that result in a maximal mean response. To explicate our objective and to state the assumptions, we use the potential outcomes model. The method proposed makes smooth parametric assumptions only on quantities that are directly relevant to the goal of estimating the optimal rules. We illustrate the methodology proposed via a small simulation.
