Similar Articles (20 results)
1.
A recent comparison of evolutionary, neural network, and scatter search heuristics for solving the p-median problem is completed by (i) gathering or obtaining exact optimal values in order to evaluate errors precisely, and (ii) including results obtained with several variants of a variable neighborhood search (VNS) heuristic. For a first, well-known series of instances, the average errors of the evolutionary and neural network heuristics are over 10%, more than 1000 times larger than that of VNS. For a second series, their error is about 3%, while the errors of the parallel VNS and of a hybrid heuristic are about 0.01%, and that of parallel scatter search is smaller still.

2.
In many linear inverse problems the unknown function f (or its discrete approximation Θ, a p × 1 vector), which needs to be reconstructed, is subject to nonnegativity constraints; we call these problems nonnegative linear inverse problems (NNLIPs). This article considers NNLIPs in which the error distribution is not confined to the traditional Gaussian or Poisson cases: we adopt the exponential family of distributions, of which the Gaussian and Poisson are special cases. We search for the nonnegative maximum penalized likelihood (NNMPL) estimate of Θ. The size of Θ often prohibits direct implementation of traditional methods for constrained optimization. Given that the measurements and point-spread-function (PSF) values are all nonnegative, we propose a simple multiplicative iterative algorithm. We show that if there is no penalty, then this algorithm converges almost surely; otherwise a relaxation or line search is needed to ensure its convergence.
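One well-known member of the family of multiplicative iterations described here, for Poisson data with a nonnegative system matrix A, is the EM/Richardson–Lucy update θ ← θ · (Aᵀ(y/Aθ)) / (Aᵀ1): every factor is nonnegative, so nonnegativity of θ is preserved automatically. This is a sketch of that classical special case (no penalty term, invented toy data), not necessarily the authors' exact algorithm.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def mult_update(A, y, theta, iters=200):
    # column sums A^T 1, used as normalizing denominators
    colsum = [sum(A[i][j] for i in range(len(A))) for j in range(len(theta))]
    for _ in range(iters):
        Ax = matvec(A, theta)
        # elementwise ratio y / (A theta), guarded against division by zero
        ratio = [yi / max(axi, 1e-12) for yi, axi in zip(y, Ax)]
        # backprojection A^T ratio
        back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(len(theta))]
        # multiplicative update keeps theta nonnegative
        theta = [t * b / c for t, b, c in zip(theta, back, colsum)]
    return theta
```

With A the identity the fixed point is reached in one step (θ = y); in general the iteration increases the Poisson likelihood monotonically.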

3.
Genetic algorithms (GAs) are adaptive search techniques designed to find near-optimal solutions of large-scale optimization problems with multiple local maxima. Standard versions of the GA are defined for objective functions which depend on a vector of binary variables. The problem of finding the maximum a posteriori (MAP) estimate of a binary image in Bayesian image analysis appears well suited to a GA, as images have a natural binary representation and the posterior image probability is a multi-modal objective function. We use the numerical optimization problem posed in MAP image estimation as a test-bed on which to compare GAs with simulated annealing (SA), another all-purpose global optimization method. Our conclusion is that the GAs we applied perform poorly, even after adaptation to this problem. This is somewhat unexpected, given the widespread claims of GAs' effectiveness, but it is in keeping with work by Jennison and Sheehan (1995), which suggests that GAs are not adept at handling problems involving a great many variables of roughly equal influence. We reach more positive conclusions concerning the use of the GA's crossover operation in recombining near-optimal solutions obtained by other methods. We propose a hybrid algorithm in which crossover is used to combine subsections of image reconstructions obtained using SA, and we show that this algorithm is more effective and efficient than SA or a GA individually.
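The recombination idea in the hybrid algorithm can be sketched in miniature: given two binary reconstructions, swap a block of rows and keep whichever of the parents and children scores highest under the posterior. The score below (data fidelity plus an Ising-like smoothness term) and the row-block crossover are invented illustrations, not the paper's model.

```python
import random

def score(img, data, beta=1.0):
    # log-posterior up to a constant: agreement with data + smoothness prior
    fit = sum(1 for r, dr in zip(img, data) for a, b in zip(r, dr) if a == b)
    smooth = sum(1 for r in img for a, b in zip(r, r[1:]) if a == b)
    smooth += sum(1 for r, s in zip(img, img[1:]) for a, b in zip(r, s) if a == b)
    return fit + beta * smooth

def crossover(img1, img2, data, rng):
    # swap the top k rows between the two reconstructions,
    # then keep the best-scoring of the four candidate images
    k = rng.randrange(1, len(img1))
    c1 = img1[:k] + img2[k:]
    c2 = img2[:k] + img1[k:]
    return max((img1, img2, c1, c2), key=lambda im: score(im, data))
```

Because the parents are always among the candidates, the returned image never scores worse than either input; any improvement comes from a child combining well-reconstructed subsections.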

4.
A transformation is proposed to convert the nonlinear constraints on the parameters of the mixture transition distribution (MTD) model into box constraints. The proposed transformation removes the difficulties associated with maximum likelihood estimation (MLE) in MTD modeling, so that the MLEs of the parameters can easily be obtained via a hybrid of evolutionary and/or quasi-Newton algorithms for global optimization. Simulation studies are conducted to demonstrate MTD modeling by the proposed approach through a global search algorithm in the R environment. Finally, the proposed approach is used for MTD modeling of three real data sets.

5.

Empirical likelihood (EL) is a nonparametric method based on observations. The EL method is formulated as a constrained optimization problem, whose solution is usually obtained via a duality approach. In this study, we propose an alternative algorithm for solving this constrained optimization problem, based on a Newton-type iteration for the Lagrange multipliers. We provide a simulation study and a real data example to compare the performance of the proposed algorithm with the classical one. The simulation and real data results show that the proposed algorithm is comparable with the existing one in terms of efficiency and CPU time.
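For the simplest case of a single mean constraint, the dual problem reduces to a one-dimensional root-finding problem: find the Lagrange multiplier λ solving Σᵢ zᵢ/(1 + λzᵢ) = 0 with zᵢ = xᵢ − μ, after which the EL weights are wᵢ = 1/(n(1 + λzᵢ)). A plain Newton iteration for this scalar case (a sketch, not the authors' algorithm, which targets the general multiplier system) looks like:

```python
def el_weights(x, mu, iters=50):
    # Solve sum_i z_i/(1 + lam*z_i) = 0 by Newton's method, z_i = x_i - mu;
    # mu must lie inside the convex hull of the data for a solution to exist.
    z = [xi - mu for xi in x]
    n = len(z)
    lam = 0.0
    for _ in range(iters):
        g = sum(zi / (1 + lam * zi) for zi in z)             # score
        h = -sum(zi * zi / (1 + lam * zi) ** 2 for zi in z)  # derivative (< 0)
        step = g / h
        lam -= step
        if abs(step) < 1e-12:
            break
    w = [1.0 / (n * (1 + lam * zi)) for zi in z]
    return lam, w
```

At the solution the weights automatically sum to one and reproduce the hypothesized mean, which makes a convenient correctness check.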

6.
The conditional tail expectation (CTE) is an indicator of tail behavior that takes into account both the frequency and magnitude of a tail event. However, the asymptotic normality of its empirical estimator requires that the underlying distribution possess a finite variance; this can be a strong restriction in actuarial and financial applications. A valuable alternative is the median shortfall (MS), although it only gives information about the frequency of a tail event. We construct a class of tail Lp-medians encompassing the MS and the CTE. For p in (1,2), a tail Lp-median depends on both the frequency and magnitude of tail events, and its empirical estimator is, within the range of the data, asymptotically normal under a condition weaker than a finite variance. We extrapolate this estimator, and an alternative one, to extreme levels using the heavy-tailed framework. The estimators are showcased in a simulation study and on real fire insurance data.

7.
Zuo (2004) investigated the simplified replacement finite-sample breakdown point of the weighted Lp-depth and Lp-median for some appropriate weight functions. This article first studies the addition breakdown point of weighted Lp-depth functions. In addition, for some weight functions different from those in Zuo (2004, Allgemeines Statistisches Archiv 88:215–234), we establish lower bounds for these two types of breakdown point of the weighted L2-median.

8.
We consider a family of effective and efficient strategies for generating several types of experimental designs with high efficiency. These strategies employ randomized search directions and at some stages allow steps in a direction of decreasing efficiency in an effort to avoid local optima; hence they have some affinity with the simulated annealing algorithm of combinatorial optimization. The methods work well and compare favourably with other search strategies. We have implemented them for incomplete block designs (optionally resolvable) and for row-column designs.

9.
We consider the problem of deriving formal objective priors for the causal/stationary autoregressive model of order p. We compare the frequentist behaviour of the most common default priors, namely the uniform (over the stationarity region) prior, the Jeffreys prior and the reference prior.

10.

In actuarial applications, mixed Poisson distributions are widely used for modelling claim counts, as observed data on the number of claims often exhibit a variance noticeably exceeding the mean. In this study, a new claim number distribution is obtained by mixing the negative binomial parameter p, reparameterized as p = exp(−λ), with a gamma distribution. Basic properties of this new distribution are given. Maximum likelihood estimates of the parameters are computed using the Newton–Raphson method and a genetic algorithm (GA), and we compare the efficiency of these methods by simulation. A numerical example is provided.
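A Newton–Raphson step under the p = exp(−λ) reparameterization can be illustrated in a deliberately simplified setting: a plain negative binomial with known size r, where the MLE has the closed form p̂ = r/(r + x̄) and so provides an exact check on the iteration. This toy sketch (known r, single parameter λ, invented data) is only loosely modeled on the mixture in the abstract.

```python
import math

def nb_lambda_mle(xs, r, lam0=1.0, iters=100):
    # Newton-Raphson for lambda, where p = exp(-lambda) and the NB
    # log-likelihood is l(lam) = sum_i [ -r*lam + x_i*log(1 - exp(-lam)) ]
    n, sx = len(xs), sum(xs)
    lam = lam0
    for _ in range(iters):
        e = math.exp(lam)
        score = -n * r + sx / (e - 1)     # dl/dlam, using e^-lam/(1-e^-lam) = 1/(e^lam - 1)
        hess = -sx * e / (e - 1) ** 2     # d2l/dlam2, always negative (concave)
        step = score / hess
        lam -= step
        if abs(step) < 1e-12:
            break
    return lam
```

Because the log-likelihood is strictly concave in λ, the iteration converges to the unique maximizer, which must satisfy exp(−λ̂) = r/(r + x̄).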

11.
We propose three new statistics, Zp, Cp, and Rp, for testing a p-variate (p ≥ 2) normal distribution and compare them with the prominent test statistics. We show that Cp is overall the most powerful and is effective against skew, long-tailed, and short-tailed symmetric alternatives. We show that Zp and Rp are most powerful against skew and long-tailed alternatives, respectively. The Zp and Rp statistics can also be used for testing an assumed p-variate nonnormal distribution.

12.
Bayesian Additive Regression Trees (BART) is a statistical sum-of-trees model. It can be considered a Bayesian version of machine learning tree ensemble methods, where the individual trees are the base learners. However, for datasets where the number of variables p is large, the algorithm can become inefficient and computationally expensive. Another method popular for high-dimensional data is random forests, a machine learning algorithm which grows trees using a greedy search for the best split points; however, its default implementation does not produce probabilistic estimates or predictions. We propose an alternative fitting algorithm for BART called BART-BMA, which uses Bayesian model averaging and a greedy search algorithm to obtain a posterior distribution more efficiently than BART for datasets with large p. BART-BMA incorporates elements of both BART and random forests to offer a model-based algorithm which can deal with high-dimensional data. We have found that BART-BMA can be run in a reasonable time on a standard laptop for the "small n, large p" scenario common in many areas of bioinformatics. We showcase this method using simulated data and data from two real proteomic experiments, one to distinguish between patients with cardiovascular disease and controls and another to classify aggressive from non-aggressive prostate cancer, and we compare our results to those of the main competitors. Open source code written in R and Rcpp to run BART-BMA can be found at: https://github.com/BelindaHernandez/BART-BMA.git.

13.
We consider the problem of estimating and testing a general linear hypothesis in a general multivariate linear model, the so-called Growth Curve model, when the p × N observation matrix is normally distributed.

The maximum likelihood estimator (MLE) for the mean is a weighted estimator involving the inverse of the sample covariance matrix, which is unstable when p is large and close to N and singular when p exceeds N. We modify the MLE to an unweighted estimator and propose new tests, which we compare with the previous likelihood ratio test (LRT) based on the weighted estimator, i.e., the MLE. We show that the performance of the new tests based on the unweighted estimator is better than that of the LRT based on the MLE.


14.
Use of full Bayesian decision-theoretic approaches to obtain optimal stopping rules for clinical trial designs typically requires Backward Induction. However, the implementation of Backward Induction is, apart from simple trial designs, generally impossible due to analytical and computational difficulties. In this paper we present a numerical approximation of Backward Induction in a multiple-arm clinical trial design comparing k experimental treatments with a standard treatment, where patient response is binary. We propose a novel stopping rule, denoted by τp, as an approximation of the optimal stopping rule, using the optimal stopping rule of a single-arm clinical trial obtained by Backward Induction. We then present an example of a double-arm (k = 2) clinical trial in which we use a simulation-based algorithm together with τp to estimate the expected utility of continuing, and we compare our estimates with exact values obtained by an implementation of Backward Induction. For trials with more than two treatment arms, we evaluate τp by studying its operating characteristics in a three-arm trial example. Results from these examples show that our approximate trial design has attractive properties and hence offers a relevant solution to the problem posed by Backward Induction.
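The single-arm building block can be sketched directly: with a Beta prior on the response rate, Backward Induction works backwards over states (n patients treated, s responses), comparing the utility of stopping now with the expected utility of treating one more patient. The utility below (posterior mean of the response rate, minus a per-patient cost of continuing) and all parameters are invented for illustration; the paper's design and utility are not specified in the abstract.

```python
from functools import lru_cache

def optimal_value(nmax, a0=1.0, b0=1.0, cost=0.01):
    # Backward induction over states (n treated, s successes),
    # Beta(a0, b0) prior on the response rate.
    @lru_cache(maxsize=None)
    def value(n, s):
        post_mean = (a0 + s) / (a0 + b0 + n)
        stop = post_mean                      # utility of stopping now
        if n == nmax:
            return stop                       # horizon reached: must stop
        # expected utility of treating one more patient, less a cost
        cont = -cost + post_mean * value(n + 1, s + 1) \
                     + (1 - post_mean) * value(n + 1, s)
        return max(stop, cont)
    return value(0, 0)
```

The memoized recursion makes the exponential tree of histories collapse to the O(nmax²) grid of (n, s) states, which is what makes exact Backward Induction feasible for a single arm but not for multiple arms.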

15.
In this article, a new two-phase algorithm for computationally expensive simulation problems is presented. In the first phase, the algorithm is model-based: the simulation output is used directly in the optimization stage. In the second phase, the simulation model is replaced by a valid metamodel. In addition, a new optimization algorithm is presented. To evaluate its performance, the proposed algorithm is applied to the (s,S) inventory problem as well as to five test functions. Numerical results show that it finds better solutions in less computational time than the corresponding metamodel-based algorithm.

16.
Unconditional exact tests are increasingly used in practice for categorical data, both to increase the power of a study and to make the data analysis consistent with the study design. In a two-arm study with a binary endpoint, the p-value of Barnard's exact unconditional test is computed by maximizing the tail probability over a nuisance parameter ranging from 0 to 1. The traditional grid search method can find an approximate maximum by partitioning the parameter space, but it is not accurate, and the approach becomes computationally intensive for studies beyond two groups. We propose using a polynomial method that rewrites the tail probability as a polynomial; the roots of its derivative contain the location of the global maximum of the tail probability. We use an example from a double-blind randomized Phase II cancer clinical trial to illustrate the application of the proposed polynomial method to obtain an accurate p-value, and we compare the performance of the proposed method and the traditional grid search method under various conditions. We recommend this new polynomial method for computing accurate exact unconditional p-values.
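The maximization being replaced can be made concrete. For two independent binomials under the null of a common success probability π, the exact unconditional p-value is the supremum over π of the probability of tables at least as extreme as the one observed (here scored by the difference in sample proportions). The sketch below implements the traditional grid search that the abstract contrasts with the polynomial method; the extremeness criterion and grid size are illustrative choices.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def unconditional_pvalue(x1, n1, x2, n2, grid=200):
    # test statistic: difference in observed proportions
    t_obs = x2 / n2 - x1 / n1
    def tail_prob(p):
        # P(T >= t_obs) when both arms share success probability p
        total = 0.0
        for k1 in range(n1 + 1):
            for k2 in range(n2 + 1):
                if k2 / n2 - k1 / n1 >= t_obs - 1e-12:  # small float tolerance
                    total += binom_pmf(k1, n1, p) * binom_pmf(k2, n2, p)
        return total
    # grid search over the nuisance parameter on [0, 1]
    return max(tail_prob(i / grid) for i in range(grid + 1))
```

Note that tail_prob(π) is a polynomial in π of degree n1 + n2, which is exactly what makes the derivative-based polynomial approach possible: the grid maximum can miss the true supremum between grid points, whereas the critical points of the polynomial cannot.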

17.
It is well known that Yates' algorithm can be used to estimate the effects in a factorial design. We develop a modification of this algorithm, which we call the modified Yates' algorithm, together with its inverse. We show that the intermediate steps in our algorithm have a direct interpretation as estimated level-specific mean values and effects. We also show how Yates' algorithm, or our modified version, can be used to construct the blocks in a 2^k factorial design and to generate the layout sheet and confounding pattern of a 2^(k−p) fractional factorial design. In a final example we put all these methods together by generating and analysing a 2^(6−2) design with 2 blocks.
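The standard Yates' algorithm referred to here is k passes of pairwise sums and differences over the responses in standard order; after the final pass, the first entry is the grand total and the remaining contrasts, divided by 2^(k−1), give the effect estimates for a single replicate. A minimal sketch of the standard algorithm (not the modified version the paper proposes):

```python
def yates(responses):
    # responses in standard (Yates) order: (1), a, b, ab, c, ac, bc, abc, ...
    n = len(responses)
    assert n & (n - 1) == 0 and n > 0, "length must be a power of two"
    col = list(responses)
    m = n
    while m > 1:
        # one pass: sums of adjacent pairs, then their differences
        sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
        m //= 2
    total, contrasts = col[0], col[1:]       # contrasts in order A, B, AB, C, ...
    effects = [c / (n / 2) for c in contrasts]
    return total, effects
```

For the 2^2 responses (1)=10, a=14, b=8, ab=16, the passes give [24, 24, 4, 8] and then [48, 12, 0, 4], so the main effect of A is 12/2 = 6, of B is 0, and the AB interaction is 4/2 = 2, matching the direct contrast calculations.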

18.
Let X = (x_ij) = (X_1, …, X_n)^T, i = 1, …, n, be an n × p random matrix having a multivariate symmetric distribution with parameters μ, Σ. The p-variate normal with mean μ and covariance matrix Σ is a member of this family. Let ρ1² be the squared multiple correlation coefficient between the first and the succeeding p1 components, and let ρ1+2² = ρ1² + ρ2² be the squared multiple correlation coefficient between the first and the remaining p1 + p2 = p − 1 components of the p-variate vector. We consider here three testing problems for multivariate symmetric distributions: (A) to test ρ1+2² = 0 against ρ1+2² > 0; (B) to test ρ2² = 0 against ρ2² > 0 when ρ1² = 0; and (C) to test ρ2² = 0 against ρ2² > 0 when ρ1² > 0. We show that for problem (A) the uniformly most powerful invariant (UMPI) and locally minimax test for the multivariate normal is UMPI and locally minimax as ρ1+2² → 0 for multivariate symmetric distributions. For problem (B) the UMPI and locally minimax test is UMPI and locally minimax as ρ2² → 0 for multivariate symmetric distributions. For problem (C) the locally best invariant (LBI) and locally minimax test for the multivariate normal is also LBI and locally minimax as ρ2² → 0 for multivariate symmetric distributions.

19.
The problem of computing the variance of a sample of N data points {x_i} may be difficult for certain data sets, particularly when N is large and the variance is small. We present a survey of possible algorithms and their round-off error bounds, including some new analysis for computations with shifted data. Experimental results confirm these bounds and illustrate the dangers of some algorithms. Specific recommendations are made as to which algorithm should be used in various contexts.
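The danger alluded to, and the shifted-data remedy, are easy to exhibit: the "textbook" one-pass formula (Σx² − (Σx)²/N)/N subtracts two nearly equal large numbers and suffers catastrophic cancellation when the mean is large relative to the standard deviation, while shifting the data by a rough estimate of the mean, or Welford-style one-pass updating, stays accurate. These are generic illustrations of algorithms of the kind surveyed, not the article's exact pseudocode.

```python
def variance_textbook(xs):
    # one-pass sum-of-squares formula: prone to catastrophic cancellation
    n = len(xs)
    s = ss = 0.0
    for x in xs:
        s += x
        ss += x * x
    return (ss - s * s / n) / n

def variance_shifted(xs, shift=None):
    # shifting by (an estimate of) the mean restores accuracy
    n = len(xs)
    k = xs[0] if shift is None else shift
    s = ss = 0.0
    for x in xs:
        d = x - k
        s += d
        ss += d * d
    return (ss - s * s / n) / n

def variance_welford(xs):
    # Welford's stable one-pass updating algorithm
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / n
```

On the three points 10⁹, 10⁹+1, 10⁹+2 (population variance 2/3), the shifted and Welford versions are accurate to machine precision while the textbook formula returns garbage: all the significant digits are consumed by the two sums near 3·10¹⁸.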

20.
We study the problem of testing H0: μ ∈ P against H1: μ ∉ P, based on a random sample of N observations from a p-dimensional normal distribution Np(μ, Σ) with Σ > 0 and P a closed, convex, positively homogeneous set. We develop the likelihood-ratio test (LRT) for this problem and show that the union-intersection principle leads to a test equivalent to the LRT. The principle also yields a large class of tests which are shown to be admissible by Stein's theorem (1956). Finally, we give the α-level cutoff points for the LRT.
