Similar Documents
1.
An extension to the class of conventional numerical probability models for nondeterministic phenomena has been identified by Dempster and Shafer in the class of belief functions. We were originally stimulated by this work, but have since come to believe that the bewildering diversity of uncertainty and chance phenomena cannot be encompassed within the conventional theory of probability, its relatively minor modifications (e.g., not requiring countable additivity), or the theory of belief functions. In consequence, we have been examining the properties of, and prospects for, the generalization of belief functions known as upper and lower, or interval-valued, probability. After commenting on what we deem to be problematic elements of common personalist/subjectivist/Bayesian positions that employ either finitely or countably additive probability to represent strength of belief and that are intended to be normative for rational behavior, we sketch some of the ways in which the set of lower envelopes, a subset of the set of lower probabilities that contains the belief functions, enables us to preserve the core of Bayesian reasoning while admitting a more realistic class of probability-like models (e.g., in its reduced insistence upon an underlying precision in our beliefs). Particular advantages of lower envelopes are identified in the area of the aggregation of beliefs.

The focus of our own research is objective probabilistic reasoning about time series generated by physical or other empirical (e.g., societal) processes. As it is not the province of a general mathematical methodology such as probability theory to rule out a priori the existence of empirical phenomena, we are concerned by the constraint imposed by conventional probability theory that an empirical process of bounded random variables believed to have a time-invariant generating mechanism must then exhibit long-run stable time averages. We have shown that lower probability models that allow for unstable time averages can only lie in the class of undominated lower probabilities, a subset of lower probability models disjoint from the lower envelopes and having the weakest relationship to conventional probability measures. Our research has been devoted to exploring and developing the theory of undominated lower probabilities so that it can be applied to model and understand nondeterministic phenomena, and we have also been interested in identifying actual physical processes (e.g., flicker noises) that exhibit behavior requiring such novel models.
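For orientation, and in standard notation rather than anything taken from the abstract itself, a lower envelope is the pointwise infimum of a set \mathcal{M} of conventional probability measures, with the conjugate upper probability obtained on complements:

\underline{P}(A) = \inf_{P \in \mathcal{M}} P(A), \qquad \overline{P}(A) = \sup_{P \in \mathcal{M}} P(A) = 1 - \underline{P}(A^{c}).

A lower probability is dominated when at least one probability measure P satisfies P(A) \ge \underline{P}(A) for every event A; the undominated lower probabilities discussed above are precisely those admitting no such dominating measure, which is why they fall outside the class of lower envelopes.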


2.
We analyze the survival time of a general duplex system sustained by a cold standby unit subjected to a priority rule. The analysis is based on advanced complex function theory (sectionally holomorphic functions). As an example, we consider Weibull–Gnedenko and Erlang distributions for failure and repair. Several graphs display the survival function.

3.
In this paper we investigate the application of stochastic complexity theory to classification problems. In particular, we define the notion of admissible models as a function of problem complexity, the number of data points N, and prior belief. This allows us to derive general bounds relating classifier complexity with data-dependent parameters such as sample size, class entropy and the optimal Bayes error rate. We discuss the application of these results to a variety of problems, including decision tree classifiers, Markov models for image segmentation, and feedforward multilayer neural network classifiers.

4.
Most economists consider that the cases of negative information value that non-Bayesian decision makers seem to exhibit clearly show that these models do not represent rational behaviour. We consider this issue for Choquet Expected Utility maximizers in a simple framework, namely the problem of choosing which event to bet on. First, we find a necessary condition to prevent negative information value, which we call Separative Monotonicity. This is a weaker condition than Savage's Sure-Thing Principle, and it appears that necessity and possibility measures satisfy it and that we can find conditioning rules such that the information value is always positive. In a second part, we question the way information value is usually measured and suggest that negative information values merely result from an inadequate formula. Instead, we suggest imposing what appears to be a weaker requirement: the betting strategy should not be Statistically Dominated. We show that this requirement is violated by classical updating rules applied to belief functions. We consider a class of conditioning rules and exhibit a necessary and sufficient condition for satisfying the Statistical Dominance criterion in the case of belief functions. Received: November 2000; revised version: July 2001

5.
Stochastic Models, 2013, 29(2): 215–245
In this paper, we study the problem of European Option Pricing in a market with short-selling constraints and transaction costs having a very general form. We consider two types of proportional costs and a strictly positive fixed cost. We study the problem within the framework of the theory of stochastic impulse control. We show that determining the price of a European option involves calculating the value functions of two stochastic impulse control problems. We obtain explicit expressions for the quasi-variational inequalities satisfied by the value functions and derive the solution in the case where the parameters of the price processes are constants and the investor's utility function is linear. We use this result to obtain a price for a call option on the stock and prove that this price is a nontrivial lower bound on the hedging price of the call option in the presence of general transaction costs and short-selling constraints. We then consider the situation where the investor's utility function has a general form and characterize the value function as the pointwise limit of an increasing sequence of solutions to associated optimal stopping problems. We thereby devise a numerical procedure to calculate the option price in this general setting and implement the procedure to calculate the option price for the class of exponential utility functions. Finally, we carry out a qualitative investigation of the option prices for exponential and linear-power utility functions.

6.
Regression Kink With an Unknown Threshold
This article explores estimation and inference in a regression kink model with an unknown threshold. A regression kink model (or continuous threshold model) is a threshold regression constrained to be everywhere continuous, with a kink at an unknown threshold. We present methods for estimation, for testing for the presence of the threshold, for inference on the regression parameters, and for inference on the regression function. A novel finding is that inference on the regression function is nonstandard since the regression function is a nondifferentiable function of the parameters. We apply recently developed methods for inference on nondifferentiable functions. The theory is illustrated by an application to the growth and debt problem introduced by Reinhart and Rogoff, using their long-span time series for the United States.
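For context, a continuous threshold ("kink") regression is commonly written with negative- and positive-part regressors so that the fitted function is continuous but its slope changes at the unknown threshold \gamma (this is the standard formulation, not a quotation from the article):

y_i = \beta_1 \min(x_i - \gamma, 0) + \beta_2 \max(x_i - \gamma, 0) + z_i' \delta + e_i .

The slope switches from \beta_1 to \beta_2 at x = \gamma, and because the fitted regression function depends on \gamma non-differentiably, standard delta-method arguments do not apply directly, which is the source of the nonstandard inference noted above.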

7.
Empirical likelihood (EL) is an important nonparametric statistical methodology. We develop a package in R called el.convex to implement EL for inference about a multivariate mean. This package contains five functions which use different optimization algorithms but seek the same goal. These functions are based on the theory of convex optimization and implement, respectively, the Newton, Davidon–Fletcher–Powell, Broyden–Fletcher–Goldfarb–Shanno, conjugate gradient, and damped Newton methods. We also compare them with the function el.test in the existing R package emplik, and discuss their relative advantages and disadvantages.
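Since the abstract describes the computation only at a high level, here is a minimal, hypothetical Python sketch of the underlying convex dual problem — a damped-Newton solver for the empirical-likelihood test of a multivariate mean. It is not the el.convex package itself; function and variable names are illustrative.

import numpy as np
from scipy.stats import chi2

def el_test_mean(x, mu0, tol=1e-8, max_iter=200):
    """Empirical-likelihood ratio test of H0: E[X] = mu0 (damped Newton on the convex dual)."""
    z = np.asarray(x, float) - np.asarray(mu0, float)      # z_i = x_i - mu0, shape (n, d)
    n, d = z.shape
    lam = np.zeros(d)                                      # Lagrange multiplier of the dual

    def dual(l):
        w = 1.0 + z @ l                                    # weights 1 + lam'z_i must stay positive
        if np.any(w <= 0.0):
            return np.inf, None
        return -np.sum(np.log(w)), w                       # convex dual objective and weights

    f_val, w = dual(lam)
    for _ in range(max_iter):
        grad = -(z / w[:, None]).sum(axis=0)               # gradient: -sum z_i / w_i
        if np.linalg.norm(grad) < tol:
            break
        hess = (z / w[:, None]).T @ (z / w[:, None])       # Hessian: sum z_i z_i' / w_i^2
        step = np.linalg.solve(hess, -grad)                # Newton direction
        t = 1.0
        f_new, w_new = dual(lam + t * step)
        while (not f_new < f_val) and t > 1e-12:           # damping / backtracking to stay feasible
            t *= 0.5
            f_new, w_new = dual(lam + t * step)
        if not np.isfinite(f_new):
            break
        lam, f_val, w = lam + t * step, f_new, w_new

    stat = -2.0 * f_val                                    # -2 log R = 2 * sum log(1 + lam'z_i)
    return stat, chi2.sf(stat, df=d)                       # asymptotic chi-square(d) p-value

# Illustrative call on simulated data:
# rng = np.random.default_rng(0)
# stat, pval = el_test_mean(rng.normal(size=(100, 2)), mu0=[0.0, 0.0])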

8.
This paper describes methods for extracting belief functions from data and incorporating expert opinions. The techniques are applied to a medical domain involving the diagnosis of different types of liver diseases. This is a domain in which experts are poor at predicting precise (singleton) outcomes. The methodology developed is shown to perform considerably better than the experts alone. Several methods by which to find belief functions from data are compared, and some methods proposed here are shown to outperform a suggestion by Shafer. The system has been fully implemented on a SEQUENT parallel machine and has a user interface which is simple to use and allows for changes and questions.

9.
We consider the identity associated with the expected values of the matrix-valued quadratic forms of two random matrices. As a special case the identity yields a matrix-valued inequality of the Schwarz type. The most important application is to Loewner's theory in the context of operator-monotone functions, and inequalities based on the concavity/convexity of matrix-valued functions are derived.

10.
We propose an approach for estimating the date of lost confidence of jet engines, which are devices with multiple components subject to disruption. A mixed Weibull distribution is estimated from a large data set subject to censoring at various times. Parametric uncertainty is derived analytically and mapped visually onto the functions of use in reliability theory, including the hazard function. We demonstrate the use of the method on a database of disruption times for components in 325 jet engines.
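In the usual notation for such models (a generic statement, not the article's own parameterisation), a K-component mixed Weibull has survival and hazard functions

S(t) = \sum_{k=1}^{K} \pi_k \exp\{-(t/\eta_k)^{\beta_k}\}, \qquad h(t) = -\frac{d}{dt} \log S(t),

with mixing weights \pi_k summing to one; an observation right-censored at time c contributes the factor S(c) rather than the density to the likelihood, which is how censored engine records enter the fit.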

11.
We present a new approach for measuring the degree of exchangeability of two continuous, identically distributed random variables or, equivalently, the degree of symmetry of their corresponding copula. While the opposite of exchangeability does not exist in probability theory, the contrary of symmetry is quite obvious from an analytical point of view. Therefore, leaving the framework of probability theory, we introduce a natural measure of symmetry for bivariate functions in an arbitrary normed function space. Restricted to the set of copulas this yields a general concept for measures of (non-)exchangeability of random variables. The fact that copulas are never antisymmetric leads to the notion of maximal degree of antisymmetry of copulas. We illustrate our approach by various norms on function spaces, most notably the Sobolev norm for copulas.
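In standard copula notation (not quoted from the paper), such a measure compares a copula with its transpose:

\mu_{\|\cdot\|}(C) = \| C - C^{t} \|, \qquad C^{t}(u,v) := C(v,u),

so \mu(C) = 0 exactly when the copula is symmetric, i.e., when the two random variables are exchangeable, and different choices of norm — the supremum norm, an L_p norm, or the Sobolev norm mentioned above — yield different measures of non-exchangeability.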

12.
We consider an efficient Bayesian approach to estimating integration-based posterior summaries from a separate Bayesian application. In Bayesian quadrature we model an intractable posterior density function f(·) as a Gaussian process, using an approximating function g(·), and find a posterior distribution for the integral of f(·), conditional on a few evaluations of f(·) at selected design points. Bayesian quadrature using normal g(·) is called Bayes-Hermite quadrature. We extend this theory by allowing g(·) to be chosen from two wider classes of functions. One is a family of skew densities and the other is the family of finite mixtures of normal densities. For the family of skew densities we describe an iterative updating procedure to select the most suitable approximation and apply the method to two simulated posterior density functions.

13.
We discuss the general form of a first-order correction to the maximum likelihood estimator which is expressed in terms of the gradient of a function, which could for example be the logarithm of a prior density function. In terms of Kullback–Leibler divergence, the correction gives an asymptotic improvement over maximum likelihood under rather general conditions. The theory is illustrated for Bayes estimators with conjugate priors. The optimal choice of hyper-parameter to improve the maximum likelihood estimator is discussed. The results based on Kullback–Leibler risk are extended to a wide class of risk functions.
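A schematic version of such a correction — given here purely as an illustration of the idea, not as the paper's exact formula — shifts the maximum likelihood estimator by an information-rescaled gradient term:

\tilde{\theta} = \hat{\theta}_{\mathrm{ML}} + \frac{1}{n}\, I(\hat{\theta}_{\mathrm{ML}})^{-1}\, \nabla \phi(\hat{\theta}_{\mathrm{ML}}),

where \phi might be the logarithm of a prior density and I the Fisher information; the claim summarised above is that a correction of this first-order form reduces Kullback–Leibler risk asymptotically under rather general conditions.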

14.
It has often been complained that the standard framework of decision theory is insufficient. In most applications, neither the maximin paradigm (relying on complete ignorance about the states of nature) nor the classical Bayesian paradigm (assuming perfect probabilistic information on the states of nature) reflects the situation under consideration adequately. Typically one possesses some, but incomplete, knowledge of the stochastic behaviour of the states of nature. In this paper, first steps towards a comprehensive framework for decision making under such complex uncertainty are provided. Common expected utility theory is extended to interval probability, a generalized probabilistic setting which has the power to express incomplete stochastic knowledge and to take the extent of ambiguity (non-stochastic uncertainty) into account. Since two-monotone and totally monotone capacities are special cases of general interval probability, where the Choquet integral and the interval-valued expectation correspond to one another, the results also show, as a welcome by-product, how to deal efficiently with Choquet Expected Utility and how to perform a neat decision analysis in the case of belief functions. Received: March 2000; revised version: July 2001
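For reference, the Choquet integral that underlies Choquet Expected Utility can be written, for a capacity \nu and a bounded utility profile u, as (a standard definition, not specific to this paper)

\int u \, d\nu = \int_0^{\infty} \nu(\{s : u(s) \ge t\}) \, dt + \int_{-\infty}^{0} \big[\nu(\{s : u(s) \ge t\}) - 1\big] \, dt ,

and for two-monotone capacities it coincides with the lower expectation \inf \{ E_P[u] : P \ge \nu \} taken over the core, which is the correspondence between the Choquet integral and the interval-valued expectation invoked above.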

15.
Deconvolution problems are naturally represented in the Fourier domain, whereas thresholding in wavelet bases is known to have broad adaptivity properties. We study a method which combines both fast Fourier and fast wavelet transforms and can recover a blurred function observed in white noise with O{n (log n)^2} steps. In the periodic setting, the method applies to most deconvolution problems, including certain 'boxcar' kernels, which are important as a model of motion blur but have poor Fourier characteristics. Asymptotic theory informs the choice of tuning parameters and yields adaptivity properties for the method over a wide class of measures of error and classes of functions. The method is tested on simulated light detection and ranging data suggested by underwater remote sensing. Both visual and numerical results show an improvement over competing approaches. Finally, the theory behind our estimation paradigm gives a complete characterization of the 'maxiset' of the method: the set of functions where the method attains a near-optimal rate of convergence for a variety of L_p loss functions.

16.
The study of count data time series has been active in the past decade, mainly in theory and model construction. There are different ways to construct time series models with a geometric autocorrelation function and a given univariate margin such as the negative binomial. In this paper, we investigate negative binomial time series models based on the binomial thinning and two other expectation-thinning operators, and show how they differ in conditional variance or heteroscedasticity. Since the model construction is in terms of probability generating functions, the relevant conditional probability mass functions typically do not have explicit forms. In order to do simulations, likelihood inference, graphical diagnostics and prediction, we use a numerical method for inversion of characteristic functions. We illustrate the numerical methods and compare the various negative binomial time series models on a real data example.
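As a toy illustration of binomial thinning (and not of the specific expectation-thinning constructions compared in the paper), the following Python sketch simulates a count series X_t = alpha o X_{t-1} + eps_t with negative binomial innovations; the innovation law here is chosen for convenience and does not reproduce an exact negative binomial margin, which, as the abstract notes, generally has to be derived through probability generating functions.

import numpy as np

def simulate_thinning_series(T, alpha, r, p, seed=None):
    """Simulate X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial thinning."""
    rng = np.random.default_rng(seed)
    x = np.empty(T, dtype=int)
    x[0] = rng.negative_binomial(r, p)                    # arbitrary starting count
    for t in range(1, T):
        survivors = rng.binomial(x[t - 1], alpha)         # alpha o X_{t-1}: keep each unit w.p. alpha
        x[t] = survivors + rng.negative_binomial(r, p)    # add the innovation eps_t
    return x

# The thinning construction gives a geometric autocorrelation function,
# rho(h) = alpha**h, regardless of the innovation distribution:
# x = simulate_thinning_series(T=500, alpha=0.6, r=2, p=0.5, seed=1)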

17.
We construct a univariate exponential dispersion model comprised of discrete infinitely divisible distributions. This model emerges in the theory of branching processes. We obtain a representation for the Lévy measure of relevant distributions and characterize their laws as Poisson mixtures and/or compound Poisson distributions. The regularity of the unit variance function of this model is employed for the derivation of approximations by the Poisson-exponential model. We emphasize the role of the latter class. We construct local approximations relating them to properties of special functions and branching diffusions.

18.
The graphical belief model is a versatile tool for modeling complex systems. The graphical structure and its implicit probabilistic and logical independence conditions define the relationships between many of the variables of the problem. The graphical model is composed of a collection of local models: models of both interactions between the variables sharing a common hyperedge and information about single variables. These local models can be constructed with either probability distributions or belief functions. This paper takes the latter approach and describes simple models for univariate and multivariate belief functions. The examples are taken from both reliability and knowledge representation problems.

19.
In the field of optimal transport theory, an optimal map is known to be a gradient map of a potential function satisfying cost-convexity. In this article, the Jacobian determinant of a gradient map is shown to be log-concave with respect to a convex combination of the potential functions when the underlying manifold is the sphere and the cost function is the distance squared.

As an application to statistics, a new family of probability densities on the sphere is defined in terms of cost-convex functions. The log-concave property of the likelihood function follows from the inequality.

20.
Imagine we have two different samples and are interested in doing semi- or non-parametric regression analysis in each of them, possibly on the same model. In this paper, we consider the problem of testing whether a specific covariate has different impacts on the regression curve in these two samples. We compare the regression curves of different samples but are interested in specific differences instead of testing for equality of the whole regression function. Our procedure allows for random designs, different sample sizes, different variance functions, different sets of regressors with different impact functions, etc. As we use the marginal integration approach, this method can be applied to any strong, weak or latent separable model as well as to additive interaction models to compare the lower-dimensional separable components between the different samples. Thus, in the case of separable models, our procedure includes the possibility of comparing the whole regression curves, thereby avoiding the curse of dimensionality. It is shown that the bootstrap fails in theory and practice. Therefore, we propose a subsampling procedure with automatic choice of subsample size. We present a complete asymptotic theory and an extensive simulation study.
