Similar Documents
20 similar documents found.
1.
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) using population Monte Carlo ideas. The efficiency of the standard SAA algorithm depends crucially on its self-adjusting mechanism, which suffers from stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting the desirable convergence properties of SAA when a square-root cooling schedule is used. It can be implemented in parallel computing environments to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that SAA would find difficult to handle satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications, including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.
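To make the population idea concrete, here is a minimal sketch of a population of annealing chains with a square-root cooling schedule. It replaces SAA's self-adjusting weighting with plain Metropolis moves and approximates the interaction by occasionally copying the best chain's state over the worst, so it illustrates the general mechanism rather than implementing PISAA itself; all names and tuning constants are ours.

```python
import numpy as np

def population_annealing(energy, dim, n_chains=8, n_iters=5000,
                         t0=5.0, step=0.5, rng=None):
    """Toy population of annealing chains: Metropolis moves under a
    square-root cooling schedule, with occasional state exchange."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=(n_chains, dim))              # current states
    e = np.array([energy(xi) for xi in x])            # current energies
    best_x, best_e = x[np.argmin(e)].copy(), e.min()
    for t in range(1, n_iters + 1):
        temp = t0 / np.sqrt(t)                        # square-root cooling
        for c in range(n_chains):
            prop = x[c] + step * rng.normal(size=dim)  # random-walk proposal
            e_prop = energy(prop)
            if np.log(rng.uniform()) < (e[c] - e_prop) / temp:
                x[c], e[c] = prop, e_prop
        if t % 100 == 0:                              # crude chain interaction
            worst, best = np.argmax(e), np.argmin(e)
            x[worst], e[worst] = x[best].copy(), e[best]
        if e.min() < best_e:
            best_e, best_x = e.min(), x[np.argmin(e)].copy()
    return best_x, best_e

# Example: minimise the rugged Rastrigin function in 5 dimensions.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
xopt, eopt = population_annealing(rastrigin, dim=5)
```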

2.
A range of procedures in both robustness and diagnostics require optimisation of a target functional over all subsamples of a given size. Whereas such combinatorial problems are extremely difficult to solve exactly, something less than the global optimum can be ‘good enough’ for many practical purposes, as shown by example. A relaxation strategy embeds these discrete, high-dimensional problems in continuous, low-dimensional ones, so that nonlinear optimisation methods can be exploited to provide a single, reasonably fast algorithm handling a wide variety of problems of this kind, thereby providing a certain unity. Four running examples illustrate the approach: on the robustness side, algorithmic approximations to minimum covariance determinant (MCD) and least trimmed squares (LTS) estimation; on the diagnostic side, detection of multiple multivariate outliers and global diagnostic use of the likelihood displacement function. This last is developed here as a global complement to Cook's (J. R. Stat. Soc. Ser. B 48:133–169, 1986) local analysis. Appropriate convergence of each branch of the algorithm is guaranteed for any target functional whose relaxed form is ‘gravitational’, a natural generalisation of concavity introduced here. Moreover, the descent strategy can downweight contaminating cases in the starting position to zero. A simulation study shows that, although not optimised for the LTS problem, our general algorithm holds its own against algorithms that are so optimised. An adapted algorithm relaxes the gravitational condition itself.
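For comparison, the combinatorial LTS objective itself is often attacked by concentration ("C-") steps; the sketch below implements that standard heuristic, not the paper's gravitational relaxation algorithm, and all function names are ours.

```python
import numpy as np

def lts_concentration(X, y, h, n_starts=20, rng=None):
    """Approximate least trimmed squares: fit OLS on a size-h subset,
    re-select the h observations with the smallest squared residuals,
    and iterate the concentration step until the subset stabilises."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])             # design with intercept
    best_obj, best_beta = np.inf, None
    for _ in range(n_starts):                         # random restarts
        idx = rng.choice(n, size=h, replace=False)
        for _ in range(50):
            beta, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
            r2 = (y - Xd @ beta) ** 2
            new_idx = np.argsort(r2)[:h]
            if set(new_idx) == set(idx):              # subset converged
                break
            idx = new_idx
        obj = np.sort(r2)[:h].sum()                   # trimmed sum of squares
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta, best_obj
```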

3.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to solve each of a sequence of one-dimensional problems in turn, recursively, to find a local optimum. The second algorithm is an empirical, stochastic optimiser, implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive Forward-Backward Gibbs sampler and uses a simulated annealing cooling schedule. Results are compared with the existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: restoring a synthetic aperture radar (SAR) image, and warping a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
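A minimal sketch of the IDP idea, under our own simplifications: an exact 1-D label DP is run along each row, with the current estimates of the neighbouring rows entering the unary cost, and the sweeps are repeated until the image settles. The quadratic data term, absolute-difference smoothness penalty, and all names are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def dp_line(unary, pen):
    """Exact 1-D label optimisation by DP, given per-pixel unary costs
    (n x m) and an m x m pairwise penalty between adjacent labels."""
    n, m = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        tot = cost[None, :] + pen                 # tot[j, k]: label j after k
        back[i] = tot.argmin(axis=1)
        cost = tot[np.arange(m), back[i]] + unary[i]
    lab = np.empty(n, dtype=int)
    lab[-1] = cost.argmin()
    for i in range(n - 1, 0, -1):                 # backtrack optimal labels
        lab[i - 1] = back[i, lab[i]]
    return lab

def idp_restore(img, levels, lam=1.0, mu=1.0, sweeps=3):
    """Iterated DP: each row is solved exactly by dp_line, conditioned
    on the current estimates of the rows above and below."""
    est = img.astype(float)
    pen = lam * np.abs(levels[:, None] - levels[None, :])
    for _ in range(sweeps):
        for r in range(img.shape[0]):
            unary = (img[r][:, None] - levels[None, :]) ** 2
            if r > 0:
                unary = unary + mu * (est[r - 1][:, None] - levels[None, :]) ** 2
            if r + 1 < img.shape[0]:
                unary = unary + mu * (est[r + 1][:, None] - levels[None, :]) ** 2
            est[r] = levels[dp_line(unary, pen)]
    return est
```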

4.
This work explores the use of sequential and batch Monte Carlo techniques to solve the nonlinear model predictive control (NMPC) problem with stochastic system dynamics and noisy state observations. This is done by treating the state inference and control optimisation problems jointly, as a single artificial inference problem on an augmented state-control space. The methodology is demonstrated on the benchmark car-up-the-hill problem as well as on an advanced F-16 aircraft terrain-following problem.

5.
Suppose some quantiles of the prior distribution of a nonnegative parameter θ are specified. Instead of eliciting just one prior density function, consider the class Γ of all density functions compatible with the quantile specification. Given a likelihood function, the problem is to find the posterior upper and lower bounds for the expected value of any real-valued function h(θ) as the density varies in Γ. Such a scheme agrees with a robust Bayesian viewpoint. Under mild regularity conditions on h(θ) and the likelihood, a procedure for finding the bounds is derived and applied to an example, after transforming the given functional optimisation problems into finite-dimensional ones.

6.
Jones, B. and Wang, J. Statistics and Computing (1999) 9(3):209–218
We consider some computational issues that arise when searching for optimal designs for pharmacokinetic (PK) studies. Special factors that distinguish these are: (i) repeated observations are taken from each subject and are usually described by a nonlinear mixed model (NLMM); (ii) design criteria depend on the model-fitting procedure; (iii) in addition to providing efficient parameter estimates, the design must also permit model checking; (iv) in practice there are several design constraints; (v) the design criteria are computationally expensive to evaluate, and numerical integration is often needed; and finally (vi) local optimisation procedures may fail to converge or become trapped at local optima. We review current optimal design algorithms and explore the possibility of using global optimisation procedures, which we use to find some optimal designs. For multi-purpose designs we suggest two surrogate design criteria for model checking and illustrate their use.

7.
Continuing increases in computing power and availability mean that many maximum likelihood estimation (MLE) problems previously thought intractable or too computationally difficult can now be tackled numerically. However, ML parameter estimation for distributions whose only analytical expression is as quantile functions has received little attention. Numerical MLE procedures for the parameters of two new families of distributions, the g-and-k and the generalized g-and-h distributions, are presented and investigated here. Simulation studies are included, and the appropriateness of using asymptotic methods is examined. Because of the generality of these distributions, this is not only an investigation of numerical MLE for these particular families but also an initial study of the performance and problems of numerical MLE applied to quantile-defined distributions in general. Datasets are also fitted using the procedures. Results indicate that sample sizes significantly larger than 100 should be used to obtain reliable estimates through maximum likelihood.
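The numerical scheme is natural to sketch: a quantile-defined density can be evaluated by inverting Q numerically, since f(x) = 1/Q'(u) at the u solving Q(u) = x. Below is a minimal version for the g-and-k family (with the conventional constant c = 0.8 and a crude finite-difference derivative); parameter bounds, bracketing limits, and starting values are our illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.stats import norm

C = 0.8  # conventional constant in the g-and-k quantile function

def gk_quantile(u, a, b, g, k):
    """g-and-k quantile: Q(u) = a + b*(1 + C*tanh(g*z/2))*z*(1+z^2)**k,
    where z is the standard normal quantile of u."""
    z = norm.ppf(u)
    return a + b * (1 + C * np.tanh(g * z / 2)) * z * (1 + z ** 2) ** k

def gk_logpdf(x, a, b, g, k, eps=1e-7):
    """log f(x) = -log Q'(u) at the u with Q(u) = x; Q is inverted by
    root-finding and Q' approximated by a central difference."""
    u = brentq(lambda v: gk_quantile(v, a, b, g, k) - x, 1e-6, 1 - 1e-6)
    dq = (gk_quantile(u + eps, a, b, g, k)
          - gk_quantile(u - eps, a, b, g, k)) / (2 * eps)
    return -np.log(dq)

def negloglik(theta, x):
    a, b, g, k = theta
    if b <= 0 or k <= -0.5:                       # keep Q monotone
        return np.inf
    try:
        return -sum(gk_logpdf(xi, a, b, g, k) for xi in x)
    except ValueError:                            # inversion bracket failed
        return np.inf

rng = np.random.default_rng(1)
x = rng.standard_normal(500)                      # toy data
res = minimize(negloglik, x0=[np.median(x), x.std(), 0.0, 0.0],
               args=(x,), method="Nelder-Mead")
```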

8.
Balanced incomplete block designs have been employed as row-column designs by a number of researchers. In this paper, necessary and sufficient conditions for the connectedness of such designs are obtained, and methods for their optimisation are presented. The optimal design is shown always to be connected.

9.
The work of this paper is based on the innovative approach of Feigin et al. (1983), who estimate parameters of lifetime distributions by equating empirical and theoretical Laplace transforms. We show that the optimal choice of the transform variable depends critically upon the number of sampling times, the way they are spaced, and how the empirical transform is formed. Two new approaches for choosing the transform variable, viz. using cross-validation or constrained optimisation, are introduced and shown to have potential for wide-ranging use.
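A tiny worked example of the transform-matching idea, and of why the choice of transform variable matters: for an exponential sample the theoretical Laplace transform is λ/(λ+s), so equating it to the empirical transform solves for λ in closed form, and the estimate visibly varies with s. The paper's cross-validation and constrained-optimisation selection rules are not implemented here.

```python
import numpy as np

def exp_rate_from_laplace(x, s):
    """Equate the empirical Laplace transform mean(exp(-s*x)) with the
    exponential one lambda/(lambda + s) and solve for lambda."""
    m = np.mean(np.exp(-s * x))
    return s * m / (1 - m)

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.5, size=1000)     # true rate 2.5
for s in (0.5, 1.0, 2.0, 5.0):
    print(s, exp_rate_from_laplace(x, s))         # estimate depends on s
```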

10.
Probabilistic graphical models offer a powerful framework to account for the dependence structure between variables, which is represented as a graph. However, the dependence between variables may render inference tasks intractable. In this paper we review techniques exploiting the graph structure for exact inference, borrowed from optimisation and computer science. They are built on the principle of variable elimination, whose complexity is dictated in an intricate way by the order in which variables are eliminated. The so-called treewidth of the graph characterises this algorithmic complexity: low-treewidth graphs can be processed efficiently. The first point we illustrate is therefore that, for inference in graphical models, the number of variables is not the limiting factor, and it is worth checking the width of several tree decompositions of the graph before resorting to approximate methods. We show how algorithms providing an upper bound on the treewidth can be exploited to derive a 'good' elimination order enabling exact inference. The second point is that when the treewidth is too large, algorithms for approximate inference linked to the principle of variable elimination, such as loopy belief propagation and variational approaches, can lead to accurate results while being much less time-consuming than Monte Carlo approaches. We illustrate the techniques reviewed in this article on benchmark inference problems in genetic linkage analysis and computer vision, as well as on hidden variable restoration in coupled hidden Markov models.
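One standard way to obtain both an elimination order and an upper bound on the treewidth is the greedy min-fill heuristic; the sketch below is a generic textbook version, not necessarily one of the specific algorithms the paper reviews.

```python
import itertools

def min_fill_order(adj):
    """Greedy min-fill: repeatedly eliminate the vertex whose
    neighbourhood needs the fewest extra edges to become a clique.
    Returns the elimination order and the induced width, which upper
    bounds the treewidth."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy

    def fill(v):                                  # missing edges among nb(v)
        return sum(1 for a, b in itertools.combinations(adj[v], 2)
                   if b not in adj[a])

    order, width = [], 0
    while adj:
        v = min(adj, key=fill)
        nb = adj[v]
        width = max(width, len(nb))               # clique met at elimination
        for a, b in itertools.combinations(nb, 2):
            adj[a].add(b); adj[b].add(a)          # connect the neighbours
        for u in nb:
            adj[u].discard(v)
        del adj[v]
        order.append(v)
    return order, width

# A 4-cycle with one chord has treewidth 2.
g = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
print(min_fill_order(g))                          # induced width 2
```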

11.
Fitting general stable laws to data by maximum likelihood is important but difficult, which is why much research has considered alternative procedures based on empirical characteristic functions. Two problems then arise: how many values of the characteristic function to select, and how to position them. We provide recommendations for both. We propose an arithmetic spacing of the transform variables, coupled with a recommendation for their location, and show that arithmetic spacing, which is far simpler to implement, closely approximates optimum spacing. The new methods that result are compared in simulation studies with existing methods, including maximum likelihood. The main conclusion is that arithmetic spacing of the values of the characteristic function, coupled with appropriately limiting the range of these values, improves the overall performance of the regression-type method of Koutrouvelis, which is the standard procedure for estimating general stable law parameters.
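A stripped-down illustration of the ingredients: the empirical characteristic function evaluated at arithmetically spaced transform variables, followed by the first regression step of a Koutrouvelis-type estimator for a symmetric, location-zero stable law, where |φ(t)| = exp(−|ct|^α) makes log(−log|φ(t)|) linear in log t. The grid end-points are arbitrary illustrative choices, not the paper's recommended range.

```python
import numpy as np

def empirical_cf(x, t):
    """Empirical characteristic function of the sample x at points t."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_cauchy(2000)                     # stable with alpha = 1
t = np.linspace(0.1, 1.0, 10)                     # arithmetic spacing
phi = empirical_cf(x, t)
# For symmetric stable laws |phi(t)| = exp(-|c*t|**alpha), so regressing
# log(-log|phi|) on log t estimates alpha from the slope.
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(np.abs(phi))), 1)
print("alpha estimate:", slope)                   # should be near 1
```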

12.
An efficient probabilistic approximation to the von Mises likelihood is used to allow fitting via GLIM. This permits the full use of covariate information in the analysis of directional data. A numerical integration is performed to evaluate the normalising constant (the modified Bessel function I0), and the method is compared to standard numerical optimisation routines from the IMSL library.
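The normalising constant of the von Mises density f(θ) = exp(κ cos(θ − μ)) / (2π I0(κ)) is a one-dimensional integral, so the numerical integration step is easy to reproduce; SciPy's i0 stands in here for the IMSL routines mentioned in the abstract.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

def bessel_i0(kappa):
    """I0 by direct quadrature:
    I0(kappa) = (1 / (2*pi)) * integral_0^{2*pi} exp(kappa*cos(t)) dt."""
    val, _ = quad(lambda t: np.exp(kappa * np.cos(t)), 0, 2 * np.pi)
    return val / (2 * np.pi)

for kappa in (0.5, 2.0, 10.0):
    print(kappa, bessel_i0(kappa), i0(kappa))     # the two columns agree
```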

13.
Unfortunately, many of the numerous algorithms for computing the cumulative distribution function (cdf) and noncentrality parameter of the noncentral F and beta distributions can produce completely incorrect results, as demonstrated by examples in the paper. Existing algorithms are scrutinized and those parts that involve numerical difficulties are identified. As a result, a pseudo-code is presented in which all the known numerical problems are resolved. This pseudo-code can be easily implemented in a programming language such as C or FORTRAN without understanding the complicated mathematical background. Symbolic evaluation of a finite, closed formula is proposed to compute exact cdf values. This approach makes it possible to check quickly and reliably the values returned by professional statistical packages over an extraordinarily wide parameter range, without any programming knowledge. This research was motivated by the fact that a very useful table for calculating the size of detectable effects in ANOVA contains suspect values in the region of large noncentrality parameters, compared with the values obtained by Patnaik's two-moment central-F approximation. The cause is identified and a corrected form of the table for ANOVA purposes is given. The accuracy of approximations to the noncentral F distribution is also discussed.
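The kind of routine the paper warns about is easy to write down: the noncentral F cdf as a Poisson mixture of regularized incomplete beta functions. The sketch below uses a fixed truncation, exactly the sort of shortcut that can break down for large noncentrality parameters; it is a baseline for comparison against a library implementation, not the paper's corrected pseudo-code.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import ncf, poisson

def ncf_cdf(x, d1, d2, nc, terms=2000):
    """P(F <= x) = sum_j Poisson(j; nc/2) * I_y(d1/2 + j, d2/2),
    where y = d1*x / (d1*x + d2) and I is the regularized incomplete
    beta function. Naive fixed-length truncation of the series."""
    y = d1 * x / (d1 * x + d2)
    j = np.arange(terms)
    return np.sum(poisson.pmf(j, nc / 2) * betainc(d1 / 2 + j, d2 / 2, y))

print(ncf_cdf(2.0, 5, 20, 30.0))                  # naive series
print(ncf.cdf(2.0, 5, 20, 30.0))                  # SciPy reference value
```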

14.
For the nonparametric estimation of multivariate finite mixture models with the conditional independence assumption, we propose a new formulation of the objective function in terms of penalised smoothed Kullback–Leibler distance. The nonlinearly smoothed majorisation-minimisation (NSMM) algorithm is derived from this perspective. An elegant representation of the NSMM algorithm is obtained using a novel projection-multiplication operator, a more precise monotonicity property of the algorithm is discovered, and the existence of a solution to the main optimisation problem is proved for the first time.

15.
One standard summary of a clinical trial is a confidence limit for the effect of the treatment. Unfortunately, standard approximate limits may have poor frequentist properties, even for quite large sample sizes. It has been known since Buehler (1957, J. Amer. Statist. Assoc. 52:482–493) that an imperfect confidence limit can be adjusted to have exact coverage. These “tight” limits are the gold-standard frequentist confidence limits. Computing tight limits requires exact calculation of certain tail probabilities and optimisation of potentially erratic functions of the nuisance parameter. Naive implementation is both computationally unreliable and highly burdensome, which perhaps explains why these limits are not in common use. For clinical trials, however, where the data and parameter have dimension two, the difficulties can be fully surmounted. This paper brings together several results in the area and applies them to simple two-dimensional problems. It is shown how to reduce the computational burden by an order of magnitude. Difficulties with optimisation reliability are mitigated by applying two different computational strategies, which tend to break down under different conditions, and taking the less stringent of the two computed limits. This paper specifically develops limits for the relative risk in a clinical trial, but it should be clear to the reader that the method extends to arbitrary measures of treatment effect without essential modification.

16.
We construct approximate optimal designs for minimising absolute covariances between least-squares estimators of the parameters (or linear functions of the parameters) of a linear model, thereby rendering relevant parameter estimators approximately uncorrelated with each other. In particular, we consider first the case of the covariance between two linear combinations. We also consider the case of two such covariances. For this we first set up a compound optimisation problem which we transform to one of maximising two functions of the design weights simultaneously. The approaches are formulated for a general regression model and are explored through some examples, including one practical problem arising in chemistry.

17.
In this paper we present a method for performing regression with stable disturbances. The method of maximum likelihood is used to estimate both the distribution and the regression parameters. Our approach uses a numerical integration procedure to calculate the stable density, followed by sequential quadratic programming optimisation procedures to obtain estimates and standard errors. A theoretical justification for the use of stable law regression is given, followed by two practical, real-world examples of the method. First, we fit the stable law multiple regression model to housing price data and examine how the results differ from normal linear regression. Second, we calculate the beta coefficients for 26 companies from the Financial Times Ordinary Shares Index.
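A compressed sketch of the approach under stated substitutions: SciPy's numerically evaluated stable density (scipy.stats.levy_stable) replaces the paper's own integration routine, and a derivative-free Nelder-Mead search replaces its sequential quadratic programming step, so this is a structural illustration rather than the paper's method. Note that levy_stable's density evaluation is slow, so this only suits small datasets.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

def stable_regression(X, y):
    """Joint ML estimation of (alpha, beta, scale) and the regression
    coefficients for y = Xb + e with stable-distributed errors e."""
    Xd = np.column_stack([np.ones(len(y)), X])    # design with intercept

    def nll(theta):
        alpha, beta, scale = theta[:3]
        if not (0.5 < alpha <= 2 and -1 <= beta <= 1 and scale > 0):
            return np.inf                         # outside parameter space
        resid = y - Xd @ theta[3:]
        return -levy_stable.logpdf(resid, alpha, beta, scale=scale).sum()

    b0, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # OLS starting values
    theta0 = np.concatenate([[1.7, 0.0, y.std() / 2], b0])
    return minimize(nll, theta0, method="Nelder-Mead",
                    options={"maxiter": 5000})
```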

18.
The number of l-overlapping success runs of length k in n trials, which was introduced and studied recently, is reconsidered here in the Bernoulli case, and two exact formulas are derived for its probability distribution function, in terms of multinomial and binomial coefficients respectively. A recurrence relation for this distribution, as well as its mean, is also obtained. Furthermore, the number of l-overlapping success runs of length k in n Bernoulli trials arranged on a circle is considered here for the first time, and its probability distribution function and mean are derived. Finally, the latter distribution is related to the first, two open problems regarding limiting distributions are stated, and numerical illustrations are given in two tables. All results are new, and they unify and extend several results of various authors on binomial and circular binomial distributions of order k.
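For intuition, here is a direct greedy counter for the linear (non-circular) statistic, together with a Monte Carlo approximation of its distribution. The counting convention implemented, where a counted run may share at most l trials with the next one, is a common one, but the paper's formal definition should be consulted; l = 0 recovers non-overlapping counting and l = k − 1 fully overlapping counting.

```python
import numpy as np

def count_runs(trials, k, l):
    """Greedy count of l-overlapping success runs of length k: a run is
    counted when k consecutive successes start at or after the first
    allowed position, and counting a run at i pushes that position to
    i + k - l."""
    count, allowed = 0, 0
    for i in range(len(trials) - k + 1):
        if i >= allowed and all(trials[i:i + k]):
            count += 1
            allowed = i + k - l
    return count

# Monte Carlo distribution for n = 15, k = 3, l = 1, p = 0.5.
rng = np.random.default_rng(0)
draws = [count_runs(rng.random(15) < 0.5, 3, 1) for _ in range(100_000)]
vals, freq = np.unique(draws, return_counts=True)
print(dict(zip(vals.tolist(), np.round(freq / len(draws), 4).tolist())))
```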

19.
Additive models provide an attractive setup for estimating regression functions in a nonparametric context. They give a flexible and interpretable model in which each regression function depends on a single explanatory variable and can be estimated at an optimal univariate rate. Most estimation procedures for these models are highly sensitive to the presence of even a small proportion of outliers in the data. In this paper, we show that a relatively simple robust version of the backfitting algorithm (using robust local polynomial smoothers) corresponds to the solution of a well-defined optimisation problem. This formulation allows us to find mild conditions under which Fisher consistency holds and to study the convergence of the algorithm. Our numerical experiments show that the resulting estimators have good robustness and efficiency properties. We illustrate the use of these estimators on a real data set, where the robust fit reveals the presence of influential outliers.
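To show the shape of such an algorithm, here is a toy robust backfitting loop in which a kernel-window median smoother stands in for the paper's robust local polynomial smoothers; the bandwidth, centring convention, and all names are our illustrative choices.

```python
import numpy as np

def local_median_smoother(x, y, h):
    """Robust univariate smoother: at each design point, the median of
    responses whose covariate lies within a window of half-width h."""
    return np.array([np.median(y[np.abs(x - xi) <= h]) for xi in x])

def robust_backfit(X, y, h=0.3, iters=20):
    """Backfitting for y = alpha + sum_j f_j(x_j) + error: each f_j is
    refit by the robust smoother on its partial residuals and centred
    to keep the additive decomposition identifiable."""
    n, p = X.shape
    alpha = np.median(y)                          # robust intercept
    f = np.zeros((p, n))
    for _ in range(iters):
        for j in range(p):
            partial = y - alpha - f.sum(axis=0) + f[j]
            f[j] = local_median_smoother(X[:, j], partial, h)
            f[j] -= np.median(f[j])               # centre each component
    return alpha, f

# Toy additive model with 5% gross outliers in the response.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
y[:15] += 10                                      # contamination
alpha_hat, f_hat = robust_backfit(X, y)
```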

20.
The conventional Cox proportional hazards regression model contains a log-linear relative risk function, linking the covariate information to the hazard ratio through a finite number of parameters. A generalisation, termed the partly linear Cox model, allows for both finite-dimensional parameters and an infinite-dimensional parameter in the relative risk function, providing a more robust specification. In this work, a likelihood-based inference procedure is developed for the finite-dimensional parameters of the partly linear Cox model. To alleviate the problems associated with a likelihood approach in the presence of an infinite-dimensional parameter, the relative risk is reparameterised so that the finite-dimensional parameters of interest are orthogonal to the infinite-dimensional parameter. Inference on the finite-dimensional parameters proceeds by maximising the profile partial likelihood, profiling out the infinite-dimensional nuisance parameter using a kernel function. The asymptotic distribution theory for the maximum profile partial likelihood estimate is established, and the estimate is shown to be asymptotically efficient; the orthogonal reparameterisation enables profile likelihood inference procedures to be employed without adjustment for estimation of the nuisance parameter. An example from a retrospective analysis in cancer demonstrates the methodology.
