Similar Articles

20 similar articles found.
1.
A nonconvex constrained optimization problem is considered in which the constraints are of the form of generalized polynomials. An invexity kernel is established for this class of problems, and a consequent theorem gives sufficient conditions for the solutions of such problems.

2.
Consider a linear regression model with independent normally distributed errors. Suppose that the scalar parameter of interest is a specified linear combination of the components of the regression parameter vector. Also suppose that we have uncertain prior information that a parameter vector, consisting of specified distinct linear combinations of these components, takes a given value. Part of our evaluation of a frequentist confidence interval for the parameter of interest is the scaled expected length, defined to be the expected length of this confidence interval divided by the expected length of the standard confidence interval for this parameter, with the same confidence coefficient. We say that a confidence interval for the parameter of interest utilizes this uncertain prior information if (a) the scaled expected length of this interval is substantially less than one when the prior information is correct, (b) the maximum value of the scaled expected length is not too large and (c) this confidence interval reverts to the standard confidence interval, with the same confidence coefficient, when the data happen to strongly contradict the prior information. We present a new confidence interval for a scalar parameter of interest, with specified confidence coefficient, that utilizes this uncertain prior information. A factorial experiment with one replicate is used to illustrate the application of this new confidence interval.

3.
The problem of improving upon the usual set estimator of a multivariate normal mean has only recently seen significant advances. Improved sets that take advantage of the Stein effect have been constructed. It is shown here that the Stein effect is so powerful that one can construct improved confidence sets that can have zero radius on a set of positive probability. Other, somewhat more sensible, sets which attain arbitrarily small radius are also constructed, and it is argued that one way to eliminate unreasonable confidence sets is through a conditional evaluation.

4.
For given (small) α and β, a sequential confidence set that covers the true parameter point with probability at least 1 − α and one or more specified false parameter points with probability at most β can be generated by a family of sequential tests. Several situations are described where this approach would be a natural one. The following example is studied in some detail: obtain an upper (1 − α)-confidence interval for a normal mean μ (variance known) with β-protection at μ − δ(μ), where δ(·) is not bounded away from 0, so that a truly sequential procedure is mandatory. Some numerical results are presented for intervals generated by (1) sequential probability ratio tests (SPRTs) and (2) generalized sequential probability ratio tests (GSPRTs). These results indicate the superiority of the GSPRT-generated intervals over the SPRT-generated ones when expected sample size is taken as the performance criterion.

5.
Consider a collection of k populations π1, π2, …, πk. The quality of the ith population is characterized by a real parameter θi, and the population is designated as superior or inferior depending on how much θi differs from θmax = max{θ1, θ2, …, θk}. From the set {π1, π2, …, πk}, we wish to select the subset of superior populations. In this paper we devise selection rules with the property that the selected set excludes all the inferior populations with probability at least 1 − α, where α is a specified number.

6.
In this paper, we develop noninformative priors for the inverse Weibull model when the parameters of interest are the scale and shape parameters. We develop first-order and second-order matching priors for both parameters. For the scale parameter, we show that the second-order matching prior is not a highest posterior density (HPD) matching prior, does not match the alternative coverage probabilities up to the second order, and is not a cumulative distribution function (CDF) matching prior. For the shape parameter, we show that the second-order matching prior is an HPD matching prior and a CDF matching prior, and also matches the alternative coverage probabilities up to the second order. For both parameters, we show that the one-at-a-time reference prior is the second-order matching prior, but Jeffreys' prior is neither the first-order nor the second-order matching prior. A simulation study is performed to compare the target coverage probabilities, and a real example is given.

7.
Confidence interval construction for the difference in mean event rates for two independent Poisson samples is discussed. Intervals are derived by considering Bayes estimates of the mean event rates under a family of noninformative priors. The coverage probabilities of the proposed intervals are compared to those of the standard Wald interval for small numbers of observed events. A compromise method of constructing the interval based on the data is suggested and its properties are evaluated. The method is illustrated in several examples.
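The Bayes-estimate approach described above can be sketched by Monte Carlo: under independent Gamma posteriors for the two Poisson means (the shape offset `prior=0.5` corresponds to the Jeffreys noninformative prior, one member of the family the abstract mentions), an equal-tailed credible interval for the rate difference falls out of sorted posterior draws. The function name and defaults are illustrative, not the paper's.

```python
import random

def bayes_poisson_diff_ci(x1, x2, alpha=0.05, draws=20000, prior=0.5, seed=0):
    """Equal-tailed credible interval for lambda1 - lambda2, assuming
    independent Gamma(x_i + prior, 1) posteriors for the two Poisson means.
    prior=0.5 corresponds to the Jeffreys prior for a Poisson mean.
    A Monte Carlo sketch, not the paper's exact derivation."""
    rng = random.Random(seed)
    diffs = sorted(
        rng.gammavariate(x1 + prior, 1.0) - rng.gammavariate(x2 + prior, 1.0)
        for _ in range(draws)
    )
    lo = diffs[int(alpha / 2 * draws)]          # 2.5% sample quantile
    hi = diffs[int((1 - alpha / 2) * draws) - 1]  # 97.5% sample quantile
    return lo, hi
```

For example, with 30 events in one sample and 10 in the other, the interval sits around the point estimate of 20 and excludes zero.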

8.
We develop an easy and direct way to define and compute the fiducial distribution of a real parameter for both continuous and discrete exponential families. Furthermore, such a distribution satisfies the requirements to be considered a confidence distribution. Many examples are provided for models which, although very simple, are widely used in applications. A characterization of the families for which the fiducial distribution coincides with a Bayesian posterior is given, and the strict connection with Jeffreys prior is shown. Asymptotic expansions of fiducial distributions are obtained without any further assumptions, and again, the relationship with the objective Bayesian analysis is pointed out. Finally, using the Edgeworth expansions, we compare the coverage of the fiducial intervals with that of other common intervals, proving the good behaviour of the former.

9.
The basis for this paper is the following observation: for a given "intractable" optimization problem for which no efficient solution technique exists, if we can devise a systematic procedure for generating independent heuristic solutions, we should be able to apply statistical extreme-value theory to obtain point estimates for the globally optimal solution. This observation has been mechanized in order to evaluate heuristic solutions and assess deviations from optimality; the strategy developed is applicable to a host of combinatorial problems. The assumptions of our model, along with computational experience, are discussed.

10.
Many seemingly different problems in machine learning, artificial intelligence, and symbolic processing can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a highly fit individual computer program. The recently developed genetic programming paradigm described herein provides a way to search the space of possible computer programs for a highly fit individual computer program to solve (or approximately solve) a surprising variety of different problems from different fields. In genetic programming, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (sexual recombination) operator appropriate for genetically mating computer programs. Genetic programming is illustrated via an example of machine learning of the Boolean 11-multiplexer function and symbolic regression of the econometric exchange equation from noisy empirical data.

Hierarchical automatic function definition enables genetic programming to define potentially useful functions automatically and dynamically during a run, much as a human programmer writing a complex computer program creates subroutines (procedures, functions) to perform groups of steps that must be performed with different instantiations of the dummy variables (formal parameters) in more than one place in the main program. Hierarchical automatic function definition is illustrated via the machine learning of the Boolean 11-parity function.
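The core mechanics (program trees drawn from a function set and a terminal set, plus subtree crossover) can be sketched in a few lines. The function and terminal names below (`AND`/`OR`/`NOT`, `a0`/`a1`/`d0`) are invented for illustration; this is not Koza's exact multiplexer setup.

```python
import random

# Illustrative function set (name -> arity) and terminal set.
FUNCS = {'AND': 2, 'OR': 2, 'NOT': 1}
TERMS = ['a0', 'a1', 'd0']

def random_tree(depth=3):
    """Grow a random program tree: a terminal string, or [func, child, ...]."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f = random.choice(list(FUNCS))
    return [f] + [random_tree(depth - 1) for _ in range(FUNCS[f])]

def nodes(tree, path=()):
    """Yield (path, subtree) for every node in the tree."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def replace(tree, path, sub):
    """Return a copy of tree with the subtree at path swapped for sub."""
    if not path:
        return sub
    t = list(tree)
    t[path[0]] = replace(t[path[0]], path[1:], sub)
    return t

def crossover(t1, t2):
    """Subtree crossover: swap a random subtree of each parent."""
    p1, s1 = random.choice(list(nodes(t1)))
    p2, s2 = random.choice(list(nodes(t2)))
    return replace(t1, p1, s2), replace(t2, p2, s1)
```

A full run would add fitness evaluation over all input cases and fitness-proportionate selection; the sketch shows only the representation and the recombination operator the abstract describes.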

11.
We consider three interval estimators for linear functions of Poisson rates: a Wald interval, a t interval with Satterthwaite's degrees of freedom, and a Bayes interval using noninformative priors. The differences in these intervals are illustrated using data from the Crash Records Bureau of the Texas Department of Public Safety. We then investigate the relative performance of these intervals via a simulation study. This study demonstrates that the Wald interval performs poorly when expected counts are less than 5, while the interval based on the noninformative prior performs best. It also shows that the Bayes interval and the interval based on the t distribution perform comparably well for more moderate expected counts.
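Of the three intervals, the Wald interval is the simplest to write down: estimate each rate by its count over exposure, and use the delta-method standard error of the linear combination. A generic sketch (the function name is invented; the t and Bayes intervals would additionally need Satterthwaite degrees of freedom and posterior sampling):

```python
from math import sqrt
from statistics import NormalDist

def wald_poisson_lincomb(counts, times, coefs, conf=0.95):
    """Wald CI for sum_i c_i * lambda_i, where X_i ~ Poisson(lambda_i * t_i).
    Each rate is estimated by X_i / t_i with variance X_i / t_i^2."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    est = sum(c * x / t for c, x, t in zip(coefs, counts, times))
    se = sqrt(sum(c * c * x / (t * t) for c, x, t in zip(coefs, counts, times)))
    return est - z * se, est + z * se
```

For a rate difference (coefficients 1 and −1) with small counts, this interval is exactly the one the simulation study finds to perform poorly.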

12.
In this paper a three-stage method for obtaining confidence sets of given form and size, with a preassigned confidence coefficient, is suggested for the case of normally distributed observations from populations with different means and a common unknown variance. The method is quite general and allows for most common forms of confidence sets. Asymptotic properties of the random sample sizes of the method are studied and compared to the same properties of other methods with the same aim.

13.
Statistical calibration or inverse prediction involves data collected in two stages. In the first stage, several values of an endogenous variable are observed, each corresponding to a known value of an exogenous variable; in the second stage, one or more values of the endogenous variable are observed which correspond to an unknown value of the exogenous variable. When estimating the value of the latter, it has been suggested that the variability about the regression relationship should not be assumed to be equal for the two stages of data collection. In this paper, the authors present a Bayesian method of analysis based on noninformative priors that takes this heteroscedasticity into account.
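The two-stage setup can be made concrete with the classical (frequentist, homoscedastic) point estimate, shown here only as a baseline; the paper's Bayesian heteroscedastic analysis is not reproduced, and the function name is illustrative.

```python
def classical_calibration(x, y, y_new):
    """Classical inverse prediction.
    Stage 1: fit y = a + b*x by least squares on (x, y).
    Stage 2: invert the fitted line at the mean of the new responses y_new."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    y0 = sum(y_new) / len(y_new)
    return (y0 - a) / b  # xhat = (ybar0 - a) / b
```

With exactly linear first-stage data y = 1 + 2x, a second-stage response of 9 inverts to x = 4.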

14.
This paper considers confidence intervals for the difference of two binomial proportions. Some currently used approaches are discussed and a new approach is proposed. These approaches are compared thoroughly under several commonly used criteria. The widely used Wald confidence interval (CI) is far from satisfactory, while Newcombe's CI, the new recentered CI, and the score CI perform very well. Recommendations are given for which approach is applicable in different situations.
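Newcombe's hybrid-score interval, one of the well-performing methods mentioned above, combines the Wilson score limits for each proportion. A sketch (standard textbook forms; the paper's new recentered CI is not reproduced):

```python
from math import sqrt
from statistics import NormalDist

def wilson(x, n, conf=0.95):
    """Wilson score interval for a single binomial proportion."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def newcombe_diff(x1, n1, x2, n2, conf=0.95):
    """Newcombe's hybrid-score CI for p1 - p2: combine the Wilson
    limits of each sample around the observed difference."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, conf)
    l2, u2 = wilson(x2, n2, conf)
    d = p1 - p2
    return (d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
            d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2))
```

Unlike the Wald CI, this interval never degenerates to zero width when an observed proportion is 0 or 1.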

15.
This paper presents algorithms for computing confidence intervals and regions for elements of a parameter vector when the signs of linear combinations of unknown parameters are observed, but the coefficients contain experimental error. The method was proposed in the geochemical literature by Kolassa (1992) specifically for petrology. Experimental data are used to give linear constraints, involving quantities measured with error, on unknown free energies and entropies of a chemical reaction. Confidence intervals are given for these parameters, and these are compared with more naïve approaches.

16.
The construction of confidence sets for the parameters of a flexible simple linear regression model for interval-valued random sets is addressed. For that purpose, the asymptotic distribution of the least-squares estimators is analyzed. A simulation study is conducted to investigate the performance of those confidence sets. In particular, the empirical coverages are examined for various interval linear models. The applicability of the procedure is illustrated by means of a real-life case study.

17.
A mixed-integer programming formulation for clustering is proposed, one that encompasses a wider range of objectives and side conditions than standard clustering approaches. The flexibility of the formulation is demonstrated in diagrams of sample problems and solutions. Preliminary computational tests in a practical setting confirm the usefulness of the formulation.

18.
19.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter—the probability of success p—is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than the others, and it is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
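The coverage oscillation described above is easy to observe directly: for a discrete procedure, the exact coverage at (p, n) is the binomial probability of those outcomes x whose interval contains p. A generic sketch using the Wilson interval as the benchmark procedure (this is not the article's proposed procedure):

```python
from math import comb, sqrt
from statistics import NormalDist

def wilson(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def coverage(p, n, conf=0.95):
    """Exact coverage probability of the Wilson interval at (p, n):
    sum the binomial pmf over outcomes whose interval covers p."""
    total = 0.0
    for x in range(n + 1):
        lo, hi = wilson(x, n, conf)
        if lo <= p <= hi:
            total += comb(n, x) * p ** x * (1 - p) ** (n - x)
    return total

# The oscillation with n for fixed p can be seen by tabulating, e.g.,
# coverage(0.2, n) for consecutive values of n.
```

Tabulating `coverage` over a grid of p near 0 or 1 is exactly how sample-size guarantees like the 1,021 vs. 177 comparison are verified.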

20.
This article studies the construction of a Bayesian confidence interval for the ratio of marginal probabilities in matched-pair designs. Under a Dirichlet prior distribution, the exact posterior distribution of the ratio is derived. The tail confidence interval and the highest posterior density (HPD) interval are studied, and their frequentist performances are investigated by simulation in terms of mean coverage probability and mean expected length of the interval. An advantage of the Bayesian confidence interval is that it is always well defined for any data structure and has shorter mean expected width. We also find that the Bayesian tail interval under the Jeffreys prior performs as well as or better than the frequentist confidence intervals.
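The tail interval under a Dirichlet prior can be approximated by posterior sampling rather than the exact derivation: with cell counts (n11, n10, n01, n00), the posterior on the cell probabilities is Dirichlet, and the marginal-ratio (p11+p10)/(p11+p01) can be drawn via independent Gammas. A Monte Carlo sketch with an assumed flat prior (the paper works with the exact posterior and also the Jeffreys prior):

```python
import random

def ratio_marginals_ci(n11, n10, n01, n00,
                       alpha=0.05, draws=20000, prior=1.0, seed=0):
    """Equal-tailed posterior interval for (p11+p10)/(p11+p01) in a
    matched-pair 2x2 table, under a symmetric Dirichlet(prior) prior.
    Dirichlet draws are built from independent Gamma variates; the
    normalizing constant cancels in the ratio."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(draws):
        g11, g10, g01, g00 = (rng.gammavariate(n + prior, 1.0)
                              for n in (n11, n10, n01, n00))
        ratios.append((g11 + g10) / (g11 + g01))
    ratios.sort()
    return (ratios[int(alpha / 2 * draws)],
            ratios[int((1 - alpha / 2) * draws) - 1])
```

Because every Gamma draw is strictly positive, the interval is well defined for any data structure, including tables with empty cells, which is the advantage the abstract highlights.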
