Similar literature
20 similar documents retrieved (search time: 31 ms)
1.
It has often been complained that the standard framework of decision theory is insufficient. In most applications, neither the maximin paradigm (relying on complete ignorance about the states of nature) nor the classical Bayesian paradigm (assuming perfect probabilistic information on the states of nature) reflects the situation under consideration adequately. Typically one possesses some, but incomplete, knowledge on the stochastic behaviour of the states of nature. In this paper first steps towards a comprehensive framework for decision making under such complex uncertainty will be provided. Common expected utility theory will be extended to interval probability, a generalized probabilistic setting which has the power to express incomplete stochastic knowledge and to take the extent of ambiguity (non-stochastic uncertainty) into account. Since two-monotone and totally monotone capacities are special cases of general interval probability, where the Choquet integral and the interval-valued expectation correspond to one another, the results also show, as a welcome by-product, how to deal efficiently with Choquet Expected Utility and how to perform a neat decision analysis in the case of belief functions. Received: March 2000; revised version: July 2001
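As a toy illustration of the kind of evaluation this paper generalizes, the sketch below computes the discrete Choquet integral of a (non-negative) utility vector with respect to a 2-monotone capacity on a three-state space. The particular capacity and utility values are invented for the example and are not taken from the paper.

```python
from itertools import chain, combinations

def choquet_integral(utilities, capacity):
    """Choquet integral of a finite, non-negative utility vector w.r.t. a capacity.

    utilities : dict state -> utility value (assumed >= 0)
    capacity  : dict frozenset of states -> capacity value (monotone, normalized)
    """
    # Sort states by increasing utility: u_(1) <= u_(2) <= ...
    states = sorted(utilities, key=utilities.get)
    total, prev_u = 0.0, 0.0
    for k, s in enumerate(states):
        upper = frozenset(states[k:])            # states with utility >= current level
        u = utilities[s]
        total += (u - prev_u) * capacity[upper]
        prev_u = u
    return total

# Hypothetical example: three states and a non-additive, symmetric capacity.
states = ["s1", "s2", "s3"]
def cap(A):                                      # made-up convex distortion: nu(A) = (|A|/3)**2
    return (len(A) / 3) ** 2
capacity = {frozenset(A): cap(A)
            for A in chain.from_iterable(combinations(states, r) for r in range(4))}
utilities = {"s1": 1.0, "s2": 4.0, "s3": 2.0}
print("Choquet expected utility:", choquet_integral(utilities, capacity))
```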

2.
This paper is devoted to applications of the Choquet integral with respect to monotone set functions in economics. We present applications in decision making, finance, insurance, social welfare and quality of life. The Choquet integral is used as the numerical representation of a preference relation in decision making, as the “expected value” of a future price in financial decision problems, as an insurance premium, and as a social evaluation function. Received: March 2000; revised version: August 2001

3.
Summary We extend to masses on a real interval the notion of ϕ-mean, usually considered in the context of σ-additive probabilities or probability distribution functions, and consider some axiomatic treatments of it at different levels of masses (simple masses, compact-support masses, tight masses, arbitrary masses). Moreover, as an important special case, we obtain axiomatic systems for general means as well. We also prove that the usual axiomatic system “Consistency with Certainty + Associativity + Monotonicity” characterizes the ϕ-mean of masses with arbitrary compact support and that, already at the level of tight masses, this system is not adequate. We note that the analytical tool used to define the ϕ-mean is the Choquet integral. Work performed under the auspices of the National Group “Inferenza Statistica: basi probabilistiche e sviluppi metodologici” (MURST 40%).
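For concreteness, here is a minimal sketch of the ϕ-mean of a simple (finitely supported, σ-additive) mass, the special case in which the Choquet integral reduces to an ordinary weighted expectation; the choice φ = exp, giving an exponential mean, is only an example and not from the paper.

```python
import math

def phi_mean(points, weights, phi, phi_inv):
    """Quasi-arithmetic (phi-)mean of a simple mass: phi^{-1}( sum_i w_i * phi(x_i) ).

    For a simple (sigma-additive) mass the Choquet integral used in the paper
    coincides with this ordinary weighted expectation of phi.
    """
    assert abs(sum(weights) - 1.0) < 1e-12, "weights must form a probability mass"
    return phi_inv(sum(w * phi(x) for x, w in zip(points, weights)))

points  = [1.0, 2.0, 4.0]
weights = [0.5, 0.25, 0.25]
print("arithmetic mean :", phi_mean(points, weights, lambda x: x, lambda y: y))
print("exponential mean:", phi_mean(points, weights, math.exp, math.log))
```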

4.
Robust Statistics considers the quality of statistical decisions in the presence of deviations from the ideal model, where deviations are modelled by neighborhoods of a certain size (radius) about the ideal model. We introduce a new concept of optimality (radius-minimaxity) for the case that this radius is not precisely known: for this notion, we determine the increase of the maximum risk over the minimax risk when the optimally robust estimator for a false neighborhood radius is used. The maximum increase of the relative risk is then minimized when the radius is known only to belong to some interval [r_l, r_u]. We pursue this minimax approach for a number of ideal models and a variety of neighborhoods. The effect of increasing parameter dimension is also studied for these models. The minimax increase of relative risk when the radius is completely unknown, compared with that of the most robust procedure, is 18.1% versus 57.1% and 50.5% versus 172.1% for one-dimensional location and scale, respectively, and less than 1/3 in other typical contamination models. In most models considered so far, the radius needs to be specified only up to a factor in order to keep the increase of relative risk below 12.5%, provided that the radius-minimax robust estimator is employed. The least favorable radii leading to the radius-minimax estimators turn out to be small: 5–6% contamination at sample size 100.
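The radius-minimax idea can be mimicked numerically: over a grid of candidate radii, pick the one whose worst-case relative risk increase over the interval [r_l, r_u] is smallest. The relative-risk function below is a purely hypothetical toy, not the risk of any of the paper's models; it only serves to show the grid search.

```python
import numpy as np

def rel_risk(r_used, r_true):
    """Toy (hypothetical) relative risk: maxRisk(estimator tuned to r_used, true radius r_true)
    divided by the minimax risk at r_true; equals 1 when r_used == r_true."""
    return 1.0 + np.log((r_used + 0.05) / (r_true + 0.05)) ** 2

r_l, r_u = 0.01, 0.25                        # radius known only to lie in [r_l, r_u]
grid = np.linspace(r_l, r_u, 200)
worst = [max(rel_risk(r0, r) for r in grid) for r0 in grid]
i = int(np.argmin(worst))
print(f"radius-minimax choice r0 = {grid[i]:.3f}, "
      f"max relative risk increase = {100 * (worst[i] - 1):.1f}%")
```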

5.
Most economists consider that the cases of negative information value that non-Bayesian decision makers seem to exhibit clearly show that these models do not represent rational behaviour. We consider this issue for Choquet Expected Utility maximizers in a simple framework, namely the problem of choosing on which event to bet. First, we find a necessary condition to prevent negative information value, which we call Separative Monotonicity. This is a weaker condition than Savage's Sure-Thing Principle, and it appears that necessity and possibility measures satisfy it and that we can find conditioning rules such that the information value is always positive. In a second part, we question the way information value is usually measured and suggest that negative information values merely result from an inadequate formula. Yet we suggest imposing what appears to be a weaker requirement, namely that the betting strategy should not be Statistically Dominated. We show that this requirement is violated for classical updating rules applied to belief functions. We then consider a class of conditioning rules and exhibit a necessary and sufficient condition for the Statistical Dominance criterion to be satisfied in the case of belief functions. Received: November 2000; revised version: July 2001
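As background for the conditioning rules discussed here, the sketch below applies one classical updating rule, Dempster conditioning, to a small belief function given by a mass assignment. The mass values are invented for the example, and the paper's own class of conditioning rules is not reproduced.

```python
def dempster_condition(mass, B):
    """Dempster conditioning of a mass assignment given event B.

    mass : dict frozenset -> mass (non-negative, sums to 1)
    B    : frozenset, the conditioning event
    Each focal set C is intersected with B; mass on empty intersections
    is discarded and the remainder is renormalized.
    """
    cond = {}
    for C, m in mass.items():
        CB = C & B
        if CB:
            cond[CB] = cond.get(CB, 0.0) + m
    total = sum(cond.values())
    return {A: m / total for A, m in cond.items()}

def belief(mass, A):
    return sum(m for C, m in mass.items() if C <= A)

# Invented example on states {a, b, c}.
a, b, c = "a", "b", "c"
mass = {frozenset({a}): 0.3, frozenset({a, b}): 0.4, frozenset({b, c}): 0.3}
B = frozenset({a, b})
post = dempster_condition(mass, B)
print("conditional masses:", post)
print("Bel({a} | B) =", belief(post, frozenset({a})))
```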

6.
By representing fair betting odds according to one or more pairs of confidence set estimators, dual parameter distributions called confidence posteriors secure the coherence of actions without any prior distribution. This theory reduces to the maximization of expected utility when the pair of posteriors is induced by an exact or approximate confidence set estimator or when a reduction rule is applied to the pair. Unlike the p-value, the confidence posterior probability of an interval hypothesis is suitable as an estimator of the indicator of hypothesis truth, since it converges to 1 if the hypothesis is true and to 0 otherwise.
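A minimal numerical illustration of the last point, assuming the textbook setting of a normal mean with known variance (not an example taken from the paper): the confidence posterior induced by the exact z-interval is N(x̄, σ²/n), and its probability of an interval hypothesis converges to the truth indicator as n grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_true, sigma = 0.3, 1.0
hypothesis = (-0.1, 0.1)                     # interval hypothesis: mu in [-0.1, 0.1] (false here)

for n in (20, 200, 2000, 20000):
    x = rng.normal(mu_true, sigma, size=n)
    xbar, se = x.mean(), sigma / np.sqrt(n)
    # Confidence posterior induced by the exact z-interval: N(xbar, se^2).
    p = norm.cdf(hypothesis[1], xbar, se) - norm.cdf(hypothesis[0], xbar, se)
    print(f"n={n:6d}  confidence posterior P(mu in [-0.1, 0.1]) = {p:.4f}")
```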

7.
We consider the problem of estimating the proportion θ of true null hypotheses in a multiple testing context. The setup is classically modelled through a semiparametric mixture with two components: a uniform distribution on the interval [0,1] with prior probability θ, and a non-parametric density f. We discuss asymptotic efficiency results and establish that two different cases occur depending on whether f vanishes on a non-empty interval or not. In the first case, we exhibit estimators converging at a parametric rate, compute the optimal asymptotic variance, and conjecture that no estimator is asymptotically efficient (i.e. attains the optimal asymptotic variance). In the second case, we prove that the quadratic risk of any estimator does not converge at a parametric rate. We illustrate these results on simulated data.
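For orientation, a standard plug-in estimator of θ in this two-component mixture (a Storey-type estimator using the proportion of p-values above a threshold λ) can be sketched as follows; this is a common benchmark, not the estimator whose efficiency is analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, theta = 20000, 0.8                        # number of tests, true proportion of nulls
is_null = rng.random(m) < theta
# Null p-values uniform on [0, 1]; alternatives concentrated near 0 (Beta(0.2, 4)).
p = np.where(is_null, rng.random(m), rng.beta(0.2, 4.0, m))

lam = 0.5
theta_hat = np.mean(p > lam) / (1.0 - lam)   # Storey-type plug-in estimator
print(f"true theta = {theta}, estimate = {theta_hat:.3f}")
```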

8.
This paper considers the problem of optimal design for inference in Generalized Linear Models when prior information about the parameters is available. The general theory of optimum design usually requires knowledge of the parameter values; since these are usually unknown, optimal design cannot be used directly in practice. One way to circumvent this problem is through so-called “optimal design in average”, or “ave optimal” design for short. The ave optimal design is chosen to minimize the expected value of some criterion function over a prior distribution. We focus on ave D_A-optimality, including ave D- and ave c-optimality, and show the appropriate equivalence theorems for these optimality criteria, which give necessary conditions for an optimal design. Ave optimal designs are of interest when, for example, a factorial experiment with a binary or a Poisson response is to be conducted. The results are applied to factorial experiments, including a control-group experiment and a 2×2 experiment.
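A small sketch of the "optimal design in average" idea for a logistic (binary-response) model: the D-criterion −log det M(ξ, β) is averaged over draws from a prior on β, and two candidate two-factor designs are compared. The designs, the prior and the model are invented for illustration and are not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(2)

def info_matrix(X, beta):
    """Fisher information of a logistic regression at design matrix X."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    return (X * W[:, None]).T @ X

def ave_D_criterion(X, prior_draws):
    """Expected value of -log det M(xi, beta) over the prior (smaller is better)."""
    vals = [-np.linalg.slogdet(info_matrix(X, b))[1] for b in prior_draws]
    return float(np.mean(vals))

# Two candidate designs with 4 runs each (columns: intercept, factor A, factor B).
design_1 = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)   # full 2x2
design_2 = np.array([[1, -1, -1], [1, 1, -1], [1, 1, 1], [1, 1, 1]], float)    # unbalanced

prior_draws = rng.normal(loc=[0.0, 1.0, -0.5], scale=0.5, size=(2000, 3))
for name, X in [("full 2x2  ", design_1), ("unbalanced", design_2)]:
    print(f"{name} ave D-criterion = {ave_D_criterion(X, prior_draws):.3f}")
```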

9.
10.
A bandit problem with infinitely many Bernoulli arms is considered. The parameters of the Bernoulli arms are independent and identically distributed random variables from a generalized beta distribution G3B(a, b, λ) with a, b > 0 and 0 < λ < 2. Under the generalized beta prior distributions, we first derive the asymptotic expected failure rates of k-failure strategies, and then obtain a lower bound for the expected failure rate over all strategies investigated in Berry et al. (1997). The asymptotic expected failure rates for the other three strategies studied in Berry et al. (1997) are also included. Numerical estimations for a variety of generalized beta prior distributions are presented to illustrate the performances of these strategies.
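To make the strategy concrete, here is a simulation sketch of a k-failure strategy (play the current arm until it has produced k failures, then switch to a fresh arm). For simplicity the arm parameters are drawn from a standard Beta prior rather than the generalized beta G3B of the paper.

```python
import numpy as np

def k_failure_strategy(n_pulls, k, rng, a=1.0, b=1.0):
    """Simulate a k-failure strategy with infinitely many Bernoulli arms.

    A fresh arm's success probability is drawn from Beta(a, b), a stand-in
    for the paper's generalized beta prior. The current arm is abandoned
    as soon as it has accumulated k failures. Returns the failure rate.
    """
    failures = 0
    p, arm_failures = rng.beta(a, b), 0
    for _ in range(n_pulls):
        if rng.random() < p:                 # success: keep playing this arm
            continue
        failures += 1
        arm_failures += 1
        if arm_failures >= k:                # give up on this arm, draw a new one
            p, arm_failures = rng.beta(a, b), 0
    return failures / n_pulls

rng = np.random.default_rng(3)
for k in (1, 2, 5):
    rate = np.mean([k_failure_strategy(10_000, k, rng) for _ in range(20)])
    print(f"k = {k}: simulated expected failure rate ~ {rate:.4f}")
```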

11.
In this article, we consider the optimal investment problem for a defined contribution (DC) pension plan with mispricing. We assume that the pension fund is allowed to invest in a risk-free asset, a market index, and a risky asset with mispricing, i.e. an asset whose prices are inconsistent across different financial markets. Assuming that the price process of the risky asset follows the Heston model, the manager of the pension fund aims to maximize the expected power utility of terminal wealth. By applying stochastic control theory, we establish the corresponding Hamilton-Jacobi-Bellman (HJB) equation, and the optimal investment strategy is obtained explicitly for the power utility function. Finally, numerical examples are provided to analyze the effects of the parameters on the optimal strategy.
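To give a flavour of the ingredients, the sketch below simulates a Heston-type risky asset (full-truncation Euler scheme) and evaluates the expected power utility of terminal wealth under a fixed constant-proportion strategy. All parameter values are invented, and no HJB equation is solved here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented Heston-type parameters: dS/S = mu dt + sqrt(v) dW1,
# dv = kappa (vbar - v) dt + xi sqrt(v) dW2, corr(dW1, dW2) = rho.
mu, kappa, vbar, xi, rho, r = 0.08, 2.0, 0.04, 0.3, -0.5, 0.02
T, steps, n_paths, gamma = 10.0, 500, 10_000, 0.5        # power utility U(x) = x**gamma / gamma
dt = T / steps

def expected_utility(pi):
    """Monte Carlo E[U(W_T)] when a constant fraction pi is held in the risky asset."""
    W = np.ones(n_paths)
    v = np.full(n_paths, vbar)
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                           # full truncation
        excess = (mu - r) * dt + np.sqrt(vp * dt) * z1    # discretized excess return
        W *= 1.0 + r * dt + pi * excess
        v += kappa * (vbar - vp) * dt + xi * np.sqrt(vp * dt) * z2
    W = np.maximum(W, 1e-8)                               # guard against negative wealth
    return np.mean(W**gamma / gamma)

for pi in (0.0, 0.5, 1.0, 1.5):
    print(f"pi = {pi:3.1f}  estimated E[U(W_T)] = {expected_utility(pi):.4f}")
```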

12.
It is well known that a Bayesian credible interval for a parameter of interest is derived from a prior distribution that appropriately describes the prior information. It is less well known that there exists a frequentist approach, developed by Pratt (1961), that also utilizes prior information in the construction of frequentist confidence intervals. This frequentist approach produces confidence intervals that have minimum weighted average expected length, averaged according to some weight function that appropriately describes the prior information. We begin with a simple model as a starting point for comparing these two distinct procedures of interval estimation. Consider X_1, …, X_n, independent and identically N(μ, σ²)-distributed random variables, where σ² is known and the parameter of interest is μ. Suppose also that previous experience with similar data sets and/or specific background and expert opinion suggest that μ = 0. Our aim is (a) to develop two types of Bayesian 1 − α credible intervals for μ, derived from an appropriate prior cumulative distribution function F(μ), and, more importantly, (b) to compare these Bayesian 1 − α credible intervals for μ with the frequentist 1 − α confidence interval for μ derived from Pratt's frequentist approach, in which the weight function corresponds to the prior cumulative distribution function F(μ). We show that the endpoints of the Bayesian 1 − α credible intervals for μ are very different from the endpoints of the frequentist 1 − α confidence interval for μ when the prior information strongly suggests that μ = 0 and the data support the uncertain prior information about μ. In addition, we assess the performance of these intervals by analyzing their coverage probability properties and expected lengths.
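The contrast can be previewed numerically. Assuming, as a simplification, a conjugate N(0, τ²) prior (a smooth stand-in for prior information concentrated near μ = 0, not the exact prior structure analysed in the article), the sketch compares the Bayesian 1 − α credible interval with the standard frequentist z-interval.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, sigma, tau, alpha = 25, 1.0, 0.2, 0.05    # small tau: prior concentrated near mu = 0
mu_true = 0.05                               # data roughly consistent with the prior
x = rng.normal(mu_true, sigma, size=n)
xbar = x.mean()
z = norm.ppf(1 - alpha / 2)

# Frequentist 1 - alpha confidence interval.
se = sigma / np.sqrt(n)
ci = (xbar - z * se, xbar + z * se)

# Bayesian credible interval under the N(0, tau^2) prior (conjugate update).
post_prec = n / sigma**2 + 1 / tau**2
post_mean = (n / sigma**2) * xbar / post_prec
post_sd = np.sqrt(1 / post_prec)
cred = (post_mean - z * post_sd, post_mean + z * post_sd)

print(f"sample mean            : {xbar: .3f}")
print(f"95% confidence interval: [{ci[0]: .3f}, {ci[1]: .3f}]  length {ci[1] - ci[0]:.3f}")
print(f"95% credible interval  : [{cred[0]: .3f}, {cred[1]: .3f}]  length {cred[1] - cred[0]:.3f}")
```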

13.
We derive approximating formulas for the mean and the variance of an autocorrelation estimator which are of practical use over the entire range of the autocorrelation coefficient ρ. The least-squares estimator ∑_{i=1}^{n−1} ε_i ε_{i+1} / ∑_{i=1}^{n−1} ε_i² is studied for a stationary AR(1) process with known mean. We use the second-order Taylor expansion of a ratio and employ the arithmetic-geometric series instead of replacing partial Cesàro sums. In the case of the mean we derive Marriott and Pope's (1954) formula, with (n−1)⁻¹ instead of n⁻¹, and an additional term α(n−1)⁻². This new formula reproduces the expected decline of the negative bias towards zero as ρ approaches unity. In the case of the variance, Bartlett's (1946) formula results, with (n−1)⁻¹ instead of n⁻¹. The theoretical expressions are corroborated in a simulation experiment. A comparison shows that our formula for the mean is more accurate than the higher-order approximation of White (1961) for |ρ| > 0.88 and n ≥ 20. In principle, the presented method can be used to derive approximating formulas for other estimators and processes. Received: November 30, 1999; revised version: July 3, 2000
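A simulation of the kind reported here is easy to reproduce: generate a stationary AR(1) process with known (zero) mean and compare the empirical mean and variance of the least-squares estimator with ρ itself. No closed-form approximation is asserted in the sketch; it only produces the Monte Carlo quantities.

```python
import numpy as np

rng = np.random.default_rng(6)

def ls_autocorr(eps):
    """Least-squares estimator sum eps_i eps_{i+1} / sum eps_i^2 (known zero mean)."""
    return np.dot(eps[:-1], eps[1:]) / np.dot(eps[:-1], eps[:-1])

def simulate_ar1(rho, n, burn=200):
    e = rng.standard_normal(n + burn)
    x = np.empty(n + burn)
    x[0] = e[0] / np.sqrt(1 - rho**2)        # start in stationarity
    for t in range(1, n + burn):
        x[t] = rho * x[t - 1] + e[t]
    return x[burn:]

n, reps = 30, 10_000
for rho in (0.3, 0.6, 0.9):
    est = np.array([ls_autocorr(simulate_ar1(rho, n)) for _ in range(reps)])
    print(f"rho = {rho}: mean of estimator = {est.mean():.4f} "
          f"(bias {est.mean() - rho:+.4f}), variance = {est.var():.4f}")
```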

14.
A crucial assumption of discrete choice models is that observed individual behavior is a direct function of unobserved individual utility maximization. There are situations, however, where observed behavior is ambiguous with respect to maximum utility. This is the case when individual utility maximization is hampered by global restrictions of action. Typically, such restrictions are tied to particular decision alternatives, which causes an asymmetric influence on individual behavior. The existence of global asymmetric restrictions on individual behavior can be treated as a second unobserved variable. This leads to two separate models, which have to be estimated simultaneously: a decision model on the one hand and a restriction model on the other. The standard decision model arises as a special case with zero restriction probability. McKelvey and Zavoina's pseudo-R² can be employed as a straightforward measure of goodness-of-fit. Neglecting the presence of asymmetric restrictions, or treating them as symmetric effects, leads to biased estimators. This is discussed in a formal manner and demonstrated by means of a simulation study. The bias may occur in either direction and affects not only the model parameters themselves but also their standard errors. To avoid such bias, it seems advisable to use the extended model whenever possible and to test for a zero restriction probability. I wish to thank Reinhard Hujer, Jo Grammig, Matthias Lob, Notburga Ott, Reinhold Schnabel and an anonymous referee for helpful comments on earlier drafts of this paper.
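A toy simulation in the spirit of the described study: behaviour follows a logit decision model, but with probability q a global restriction forces the "restricted" alternative regardless of utility; fitting a plain logit that ignores the restriction biases the coefficients. The data-generating values and the logit fit below are illustrative only and are not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, beta0, beta1, q = 5000, 0.5, 1.0, 0.3       # q: probability that the restriction binds

x = rng.normal(size=n)
util = beta0 + beta1 * x + rng.logistic(size=n)
y_unrestricted = (util > 0).astype(float)
restricted = rng.random(n) < q
y = np.where(restricted, 0.0, y_unrestricted)  # restriction forces alternative 0

def neg_loglik(b):
    eta = b[0] + b[1] * x
    return np.sum(np.logaddexp(0.0, eta) - y * eta)   # standard logit log-likelihood

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(f"true (beta0, beta1) = ({beta0}, {beta1})")
print(f"naive logit ignoring the restriction: ({fit.x[0]:.3f}, {fit.x[1]:.3f})")
```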

15.
16.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has a power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
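A sign-bias model of this kind can be made explicit for small n: if under φ each rank i contributes i to T+ independently with probability φ, then P(T+ = t | φ) follows from a simple convolution, and with a uniform prior the posterior for φ on a grid is proportional to that likelihood. This is a generic reconstruction of such a model, not the article's exact computations or tables.

```python
import numpy as np

def likelihood_Tplus(t_obs, n, phis):
    """P(T+ = t_obs | phi) under the sign-bias model: rank i enters T+ with prob. phi."""
    like = np.empty_like(phis)
    max_t = n * (n + 1) // 2
    for j, phi in enumerate(phis):
        dist = np.zeros(max_t + 1)
        dist[0] = 1.0
        for i in range(1, n + 1):            # convolve in rank i: add i with probability phi
            new = dist * (1.0 - phi)
            new[i:] += dist[:max_t + 1 - i] * phi
            dist = new
        like[j] = dist[t_obs]
    return like

n, t_obs = 12, 60                            # example observed signed-rank statistic
phis = np.linspace(0.001, 0.999, 999)
like = likelihood_Tplus(t_obs, n, phis)
post = like / like.sum()                     # uniform prior; posterior discretized on the grid
print(f"posterior mean of phi given T+ = {t_obs} (n = {n}): {(phis * post).sum():.3f}")
print(f"P(phi > 0.5 | T+) = {post[phis > 0.5].sum():.3f}")
```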

17.
When one wants to check a tentatively proposed model for departures that are not well specified, looking at residuals is the most common diagnostic technique. Here, we investigate the use of Bayesian standardized residuals to detect unknown hierarchical structure. Asymptotic theory, also supported by simulations, shows that the use of Bayesian standardized residuals is effective when the within-group correlation, ρ, is large. However, we show that standardized residuals may not detect hierarchical structure when ρ is small. Thus, if it is important to detect modest hierarchical structure (i.e., small ρ), one should use other diagnostic techniques in addition to the standardized residuals. We use “quality of care” data from the Patterns of Care Study, a two-stage cluster sample of patients undergoing radiation therapy for cervix cancer, to illustrate the potential use of these residuals to detect missing hierarchical structure.
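A quick numerical companion to this point (a generic simulation, not the Patterns of Care data): when the within-group correlation ρ is small, standardized residuals from a model that ignores the grouping behave almost like i.i.d. noise, so the hierarchical structure is hard to see in them.

```python
import numpy as np

rng = np.random.default_rng(8)

def grouped_data(rho, n_groups=50, group_size=20):
    """Exchangeable within-group correlation rho via a shared group random effect."""
    tau2, sigma2 = rho, 1.0 - rho            # unit total variance split according to rho
    u = rng.normal(0.0, np.sqrt(tau2), n_groups)
    return u[:, None] + rng.normal(0.0, np.sqrt(sigma2), (n_groups, group_size))

for rho in (0.02, 0.3):
    y = grouped_data(rho)
    resid = (y - y.mean()) / y.std(ddof=1)    # standardized residuals ignoring the groups
    group_means = resid.mean(axis=1)
    # Variance of group-mean residuals; roughly 1/group_size if there is no hierarchy.
    print(f"rho = {rho:4.2f}: var of group-mean residuals = {group_means.var(ddof=1):.4f} "
          f"(no-structure benchmark ~ {1 / y.shape[1]:.4f})")
```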

18.
Empirical Bayes estimation is considered for an i.i.d. sequence of binomial parameters θ_i arising from an unknown prior distribution G(·). This problem typically arises in industrial sampling, where samples from lots are routinely used to estimate the fraction defective of each lot. Two related issues are explored. The first concerns the fact that only the first few moments of G are typically estimable from the data. This suggests consideration of the interval of estimates (e.g., posterior means) corresponding to the different possible G with the specified moments. Such intervals can be obtained by application of well-known moment theory. The second development concerns the need to acknowledge the uncertainty in the estimation of the first few moments of G. Our proposal is to determine a credible set for the moments, and then find the range of estimates (e.g., posterior means) corresponding to the different possible G with moments in the credible set.
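The first step, estimating the low-order moments of G from the binomial counts, can be sketched directly: for X ~ Bin(n, θ), E[X/n] = E[θ] and E[X(X−1)]/(n(n−1)) = E[θ²]. The plug-in posterior mean below uses a Beta prior matched to those moments, i.e. just one member of the class of priors compatible with them; the interval over the whole class is not computed here.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated "lots": true theta_i from an unknown prior G, X_i ~ Bin(n, theta_i).
n_lots, n = 400, 50
theta = rng.beta(2.0, 8.0, n_lots)           # G, unknown to the analyst
x = rng.binomial(n, theta)

# Unbiased moment estimates of E[theta] and E[theta^2].
m1 = np.mean(x / n)
m2 = np.mean(x * (x - 1) / (n * (n - 1)))
var = m2 - m1**2

# One prior compatible with these moments: Beta(a, b) matched to (m1, var).
common = m1 * (1 - m1) / var - 1
a, b = m1 * common, (1 - m1) * common

x_new = 7                                     # observed defectives in a new lot of size n
post_mean = (a + x_new) / (a + b + n)
print(f"estimated E[theta] = {m1:.3f}, Var[theta] = {var:.4f}")
print(f"Beta-matched posterior mean for a lot with {x_new}/{n} defective: {post_mean:.3f}")
```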

19.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. As we focus on line-spectrum estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli-Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods should be intensively tested on real data sets by physicists.
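The Laplace-prior route amounts to an ℓ1-penalized (MAP) least-squares problem on a fine frequency grid; a minimal ISTA sketch for irregularly sampled data is given below. The grid, penalty weight and test signal are illustrative, and the Bernoulli-Gaussian / stochastic-sampling variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(10)

# Irregularly sampled signal: two sinusoids plus noise.
t = np.sort(rng.uniform(0.0, 100.0, 120))
y = (1.0 * np.cos(2 * np.pi * 0.095 * t) + 0.6 * np.sin(2 * np.pi * 0.21 * t)
     + 0.2 * rng.standard_normal(t.size))

# Over-complete cosine/sine dictionary on a thin frequency grid.
freqs = np.linspace(0.01, 0.5, 500)
A = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
               np.sin(2 * np.pi * np.outer(t, freqs))])

# ISTA for l1-penalized least squares (MAP under a Laplace prior on the amplitudes).
lam = 5.0
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the quadratic gradient
w = np.zeros(A.shape[1])
for _ in range(500):
    grad = A.T @ (A @ w - y)
    z = w - grad / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

amp = np.hypot(w[:500], w[500:])              # amplitude per grid frequency
for k in np.argsort(amp)[-5:][::-1]:
    if amp[k] > 1e-3:
        print(f"detected frequency ~ {freqs[k]:.3f}, amplitude {amp[k]:.3f}")
```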

20.
The aim of the paper is to give a coherent account of the robustness approach based on shrinking neighborhoods in the case of i.i.d. observations, and to add some theoretical complements. An important aspect of the approach is that it does not require any particular model structure but covers arbitrary parametric models, provided they are smoothly parametrized. In the meantime, equal generality has been achieved by an object-oriented implementation of the optimally robust estimators. Exponential families constitute the main examples in this article. Without pretending to give a complete data analysis, we evaluate the robust estimates on real datasets from the literature by means of our R packages ROptEst and RobLox.
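The R packages mentioned are not reproduced here; as a language-neutral illustration of estimation under a contamination neighborhood, the sketch below computes a Huber-type M-estimate of location on contaminated data (a generic robust estimator, not the optimally robust one delivered by ROptEst/RobLox).

```python
import numpy as np

rng = np.random.default_rng(11)

def huber_location(x, k=1.345, tol=1e-8, max_iter=200):
    """Huber M-estimate of location with fixed MAD scale, computed via IRLS."""
    scale = 1.4826 * np.median(np.abs(x - np.median(x)))   # MAD, consistent at the normal
    mu = np.median(x)
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))    # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Ideal model N(0, 1) contaminated by a 10% neighborhood of gross errors.
n, eps = 200, 0.10
x = np.where(rng.random(n) < eps, rng.normal(15.0, 1.0, n), rng.normal(0.0, 1.0, n))
print(f"sample mean      : {x.mean():.3f}")
print(f"Huber M-estimate : {huber_location(x):.3f}")
```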
