Similar Literature
20 similar documents found
1.
The Dempster–Shafer theory of belief functions is a method of quantifying uncertainty that generalizes probability theory. We review the theory of belief functions in the context of statistical inference. We focus mainly on a particular belief function based on the likelihood function and its application to problems with partial prior information. We also consider connections to upper and lower probabilities and to Bayesian robustness.
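As a worked illustration (the notation below is mine, not necessarily the authors'), a likelihood-based belief function of the kind referred to above is commonly built from the relative likelihood, which induces a consonant plausibility function on hypotheses:

```latex
% Relative likelihood and the induced consonant plausibility/belief pair
% (a standard construction; the paper's exact definition may differ).
\[
  R(\theta) \;=\; \frac{L(\theta \mid x)}{\sup_{\theta' \in \Theta} L(\theta' \mid x)},
  \qquad
  \mathrm{Pl}(A) \;=\; \sup_{\theta \in A} R(\theta),
  \qquad
  \mathrm{Bel}(A) \;=\; 1 - \mathrm{Pl}(A^{c}).
\]
```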

2.
A new method is proposed for drawing coherent statistical inferences about a real-valued parameter in problems where there is little or no prior information. Prior ignorance about the parameter is modelled by the set of all continuous probability density functions for which the derivative of the log-density is bounded by a positive constant. This set is translation-invariant, it contains density functions with a wide variety of shapes and tail behaviour, and it generates prior probabilities that are highly imprecise. Statistical inferences can be calculated by solving a simple type of optimal control problem whose general solution is characterized. Detailed results are given for the problems of calculating posterior upper and lower means, variances, distribution functions and probabilities of intervals. In general, posterior upper and lower expectations are achieved by prior density functions that are piecewise exponential. The results are illustrated by normal and binomial examples.
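A minimal formalization of the prior-ignorance class described above (the symbols c, θ and g are mine, not taken from the paper):

```latex
% Translation-invariant class of priors with bounded log-density derivative,
% and the induced posterior upper expectation (the lower expectation is the infimum).
\[
  \mathcal{M}_c \;=\; \Bigl\{\, p :\; p \text{ is a continuous density on } \mathbb{R},\;
      \Bigl|\tfrac{d}{d\theta}\log p(\theta)\Bigr| \le c \,\Bigr\},
  \qquad
  \overline{E}\,[\,g(\theta)\mid x\,] \;=\; \sup_{p \in \mathcal{M}_c}
      \frac{\int g(\theta)\,L(\theta\mid x)\,p(\theta)\,d\theta}
           {\int L(\theta\mid x)\,p(\theta)\,d\theta}.
\]
```

Per the abstract, the supremum is attained by piecewise-exponential prior densities.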

3.
We present a mathematical theory of objective, frequentist chance phenomena that uses as a model a set of probability measures. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
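A toy sketch of the frequency analysis along rule-selected subsequences (the rules and the simulated series below are invented for illustration; they are not the paper's universal family of rules):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical binary series whose "success" probability switches irregularly
# between 0.3 and 0.7 -- a stand-in for a non-stationary physical source.
p_path = 0.5 + 0.2 * np.sign(np.sin(np.linspace(0, 40, 5000)))
x = rng.binomial(1, p_path)

# A selection rule looks only at the past and decides whether to keep index t.
rules = {
    "all":          lambda t, x: True,
    "after_a_one":  lambda t, x: t > 0 and x[t - 1] == 1,
    "after_a_zero": lambda t, x: t > 0 and x[t - 1] == 0,
    "every_third":  lambda t, x: t % 3 == 0,
}

# Relative frequency of the event {x_t = 1} along each selected subsequence.
freqs = {}
for name, rule in rules.items():
    idx = [t for t in range(len(x)) if rule(t, x)]
    freqs[name] = float(np.mean(x[idx]))

print(freqs)
# The spread of these frequencies gives a crude interval estimate of the range
# of possible probabilities of the event, in the spirit described above.
print("interval estimate:", (min(freqs.values()), max(freqs.values())))
```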

4.
This paper is a first step towards generalizing the concept of a Markov decision process to imprecise probabilities. A concept of a generalized Markov decision process is defined and motivated. Finite-horizon, fully observable models with a total cumulative reward optimality criterion are studied. The imprecision in the model opens up the possibility of indecision. A solution procedure that generalizes the backward induction method from the classical theory is developed. This procedure finds all maximal (i.e., undominated) policies for a given generalized Markov decision process. An example illustrating the solution method is given, and directions for further research are discussed.
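A compact sketch of the maximality (undominatedness) check that such a procedure returns as its output; the tiny credal set and reward table are invented for illustration and are not from the paper:

```python
import numpy as np

# Total cumulative reward of three candidate policies, evaluated under each
# extreme point of an (invented) credal set of transition probabilities.
# rewards[i, j] = expected total reward of policy i under extreme measure j.
rewards = np.array([
    [10.0, 4.0],   # policy A: good under measure 0, poor under measure 1
    [ 6.0, 6.0],   # policy B: middling under both
    [ 5.0, 3.0],   # policy C: dominated by B under every measure
])

def dominates(i, j, rewards):
    """Policy i dominates policy j if its lower expected gain over j is > 0,
    i.e. i is strictly better under every extreme measure of the credal set."""
    return np.min(rewards[i] - rewards[j]) > 0

n = len(rewards)
maximal = [i for i in range(n)
           if not any(dominates(j, i, rewards) for j in range(n) if j != i)]
print("maximal (undominated) policies:", maximal)   # expect [0, 1]
```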

5.
In finance, inferences about future asset returns are typically quantified using parametric distributions and single-valued probabilities. It is attractive to use less restrictive inferential methods, including nonparametric methods, which do not require distributional assumptions about variables, and imprecise probability methods, which generalize the classical concept of probability to set-valued quantities. The main attractions are that the inferences adapt flexibly to the available data and that the level of imprecision in the inferences can reflect the amount of data on which they are based. This paper introduces nonparametric predictive inference (NPI) for stock returns. NPI is a statistical approach based on few assumptions, with inferences strongly based on the data and with uncertainty quantified via lower and upper probabilities. NPI is presented for inference about future stock returns, as a measure of risk and uncertainty, and for pairwise comparison of two stocks based on their future aggregate returns. The proposed NPI methods are illustrated using historical stock market data.
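A minimal sketch of how NPI-style lower and upper probabilities for the next return can be computed from Hill's A(n) assumption (the returns below are simulated; the paper's procedures for aggregate returns and pairwise stock comparison are more involved):

```python
import numpy as np

def npi_interval_probability(data, a, b):
    """Lower/upper probability that the next observation lies in (a, b),
    based on Hill's A(n): the next value falls in each of the n+1 open
    intervals between consecutive order statistics with probability 1/(n+1)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    endpoints = np.concatenate(([-np.inf], x, [np.inf]))
    lower = upper = 0
    for lo, hi in zip(endpoints[:-1], endpoints[1:]):
        if lo >= a and hi <= b:          # interval entirely inside (a, b)
            lower += 1
        if hi > a and lo < b:            # interval overlaps (a, b)
            upper += 1
    return lower / (n + 1), upper / (n + 1)

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=250)   # hypothetical daily returns
# lower/upper probability that the next return is a loss between -2% and 0
print(npi_interval_probability(returns, a=-0.02, b=0.0))
```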

6.
This paper develops alternatives to maximum likelihood estimators (MLEs) for logistic regression models and compares the mean squared error (MSE) of the estimators. The MLE for the vector of underlying success probabilities has low MSE only when the true probabilities are extreme (i.e., near 0 or 1). Extreme probabilities correspond to logistic regression parameter vectors that are large in norm. A competing "restricted" MLE and an empirical version of it are suggested as estimators with better performance than the MLE for central probabilities. An approximate EM algorithm for estimating the restriction is described. As in the case of normal-theory ridge estimators, the proposed estimators are shown to be formally derivable by Bayes and empirical Bayes arguments. The small-sample operating characteristics of the proposed estimators are compared to those of the MLE via a simulation study; both the estimation of individual probabilities and the estimation of logistic parameters are considered.
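The paper's restricted MLE and its empirical version are specific to its setup; as a loose analogue only, the sketch below fits a ridge-penalized logistic regression by Newton iterations, which similarly shrinks large-norm parameter vectors (and hence extreme fitted probabilities) toward the centre:

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, iters=25):
    """Penalized logistic regression: maximize loglik - (lam/2)*||beta||^2
    via Newton-Raphson. This is only an analogue of the paper's restricted MLE."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))           # fitted probabilities
        W = mu * (1.0 - mu)                        # IRLS weights
        grad = X.T @ (y - mu) - lam * beta
        hess = X.T @ (X * W[:, None]) + lam * np.eye(p)
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_beta = np.array([0.2, 1.0, -0.5])            # central (not extreme) probabilities
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
print("near-MLE (lam ~ 0):", ridge_logistic(X, y, lam=1e-8))
print("shrunken          :", ridge_logistic(X, y, lam=5.0))
```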

7.
A simple and standard approach for analysing multistate model data is to model all transition intensities and then compute a summary measure, such as the transition probabilities, based on this. This approach is relatively simple to implement, but it is difficult to see what the covariate effects are on the scale of interest. In this paper, we consider an alternative approach that directly models the covariate effects on transition probabilities in multistate models. Our new approach is based on binomial modelling and inverse probability of censoring weighting techniques and is very simple to implement with standard software. We show how to fit flexible regression models with possibly time-varying covariate effects.

8.
Contingent probabilities and means are ubiquitous in actuarial science. The correct interpretation of contingent probabilities and means, as well as the probability theory behind them, has been addressed by researchers. In this article, we explore their statistical aspects. We give non-parametric estimators of contingent probabilities and means, and we show that these estimators are strongly consistent. Moreover, we give the asymptotic distributions of our estimators. Finally, we provide several examples to demonstrate the applications of these estimators in actuarial science.

9.
To improve the empirical performance of the Black-Scholes model, many alternative models have been proposed to address the leptokurtic feature, volatility smile, and volatility clustering effects of asset return distributions. However, analytical tractability remains a problem for most alternative models. In this article, we study a class of hidden Markov models, including Markov switching models and stochastic volatility models, that can incorporate the leptokurtic feature and volatility clustering effects while providing analytical solutions to option pricing. We show that these models can generate long-memory phenomena when the transition probabilities depend on the time scale. We also provide an explicit analytic formula for the arbitrage-free price of European options under these models. The issues of statistical estimation and errors in option pricing are also discussed for the Markov switching models.
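A small simulation of the Markov-switching idea referred to above: a two-state hidden chain drives the return volatility, producing clustering and heavier tails than a single Gaussian (the parameter values are invented; the paper's option-pricing formula is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hidden regimes: calm (low volatility) and turbulent (high volatility).
sigma = np.array([0.005, 0.03])
P = np.array([[0.98, 0.02],          # transition probabilities of the hidden chain
              [0.05, 0.95]])

T, state = 2000, 0
states = np.empty(T, dtype=int)
for t in range(T):
    states[t] = state
    state = rng.choice(2, p=P[state])

returns = rng.normal(0.0, sigma[states])   # regime-dependent return volatility

# Volatility clustering shows up as positive autocorrelation of squared returns,
# and excess kurtosis reflects the leptokurtic (fat-tailed) return distribution.
sq = returns ** 2
acf1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]
kurt = np.mean((returns - returns.mean()) ** 4) / np.var(returns) ** 2
print(f"lag-1 autocorrelation of squared returns: {acf1:.3f}, kurtosis: {kurt:.2f}")
```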

10.
Sample selection and attrition are inherent in a range of treatment evaluation problems, such as the estimation of the returns to schooling or training. Conventional estimators tackling selection bias typically rely on restrictive functional form assumptions that are unlikely to hold in reality. This paper shows identification of average and quantile treatment effects in the presence of the double selection problem into (i) a selective subpopulation (e.g., working; selection on unobservables) and (ii) a binary treatment (e.g., training; selection on observables), based on weighting observations by the inverse of a nested propensity score that characterizes either selection probability. Weighting estimators based on parametric propensity score models are applied to female labor market data to estimate the returns to education.
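A stylized sketch of weighting by the inverse of a nested propensity score: model treatment assignment, then model selection into the observed (e.g., working) subpopulation, and weight observed outcomes by the inverse of both estimated probabilities. The simulated data, the mean-difference estimand, and the purely selection-on-observables setup below are my simplifications; the paper additionally handles selection on unobservables and quantile effects:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=(n, 2))                                           # covariates
d = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x[:, 0])))                 # treatment (e.g., training)
s = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x[:, 1] + 0.3 * d))))     # selection (e.g., working)
y = 1.0 + 0.5 * d + x[:, 0] + rng.normal(size=n)                      # outcome, observed only if s == 1

# Nested propensity scores: P(D=1 | X) and P(S=1 | D, X).
p_d = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
dx = np.column_stack([d, x])
p_s = LogisticRegression().fit(dx, s).predict_proba(dx)[:, 1]

obs = s == 1
w1 = (d[obs] == 1) / (p_d[obs] * p_s[obs])
w0 = (d[obs] == 0) / ((1 - p_d[obs]) * p_s[obs])
effect = np.sum(w1 * y[obs]) / np.sum(w1) - np.sum(w0 * y[obs]) / np.sum(w0)
print(f"weighted estimate of the treatment effect: {effect:.3f}")     # true value is 0.5
```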

11.
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1–2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECMs), estimated both by maximum likelihood and by two different Bayesian approaches, and to traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole is assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

12.
Forecasting Performance of an Open Economy DSGE Model
Econometric Reviews, 2007, 26(2): 289-328

13.
The class of Lagrangian probability distributions (LPD), given by the expansion of a probability generating function f(t) under the transformation u = t/g(t), where g(t) is also a p.g.f., has been substantially widened by removing the restriction that the defining functions g(t) and f(t) be probability generating functions. The class of modified power series distributions defined by Gupta (1974) has been shown to be a subclass of the wider class of LPDs.
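For context, the usual Lagrange expansion underlying the classical Lagrangian probability distributions is sketched below (standard notation, not taken from the paper; the paper's generalization relaxes the requirement that f and g be p.g.f.s):

```latex
% Lagrange expansion of f(t) in powers of u, where u = t/g(t), i.e. t = u\,g(t).
% For the classical Lagrangian probability distributions this gives
\[
  P(X = 0) = f(0), \qquad
  P(X = x) = \frac{1}{x!}\,
     \left.\frac{\partial^{\,x-1}}{\partial t^{\,x-1}}
     \Bigl[\, g(t)^{x}\, f'(t) \,\Bigr]\right|_{t=0},
  \quad x = 1, 2, \dots
\]
```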

14.
In this paper, we investigate some ruin problems for risk models that contain uncertainties in both the claim frequency and the claim size distribution. The problems naturally lead to the evaluation of ruin probabilities under the so-called G-expectation framework. We assume that the risk process is described by a class of G-compound Poisson processes, a special case of the G-Lévy process. Using the exponential martingale approach, we obtain upper bounds for the two-sided ruin probability as well as for the ruin probability involving investment. Furthermore, we derive the optimal investment strategy under the criterion of minimizing this upper bound. Finally, we conclude that the upper bound in the case with investment is less than or equal to that in the case without investment.

15.
Typically, regression analysis for multistate models has been based on regression models for the transition intensities. These models lead to highly nonlinear and very complex models for the effects of covariates on state occupation probabilities. We present a technique that models the state occupation or transition probabilities in a multistate model directly. The method is based on the pseudo-values from a jackknife statistic constructed from non-parametric estimators of the probability in question. These pseudo-values are used as outcome variables in a generalized estimating equation to obtain estimates of the model parameters. We examine this approach and its properties in detail for two special multistate model probabilities: the cumulative incidence function in competing risks and the current leukaemia-free survival used in bone marrow transplants. The latter is the probability that a patient is alive and in either a first or second post-transplant remission. The techniques are illustrated on a dataset of leukaemia patients given a marrow transplant. We also discuss extensions of the model that are of current research interest.
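A minimal sketch of the jackknife pseudo-value construction described above, using the simple empirical exceedance probability as the estimator for clarity. The paper instead uses non-parametric estimators appropriate under censoring (e.g., Kaplan-Meier or Aalen-Johansen) and then fits a generalized estimating equation to the pseudo-values:

```python
import numpy as np

def pseudo_values(times, t0):
    """Jackknife pseudo-observations for theta = P(T > t0), using the simple
    empirical estimator; with censored data a Kaplan-Meier or Aalen-Johansen
    estimator would be substituted, exactly as in the pseudo-value approach."""
    times = np.asarray(times, dtype=float)
    n = len(times)
    theta_full = np.mean(times > t0)
    pv = np.empty(n)
    for i in range(n):
        theta_i = np.mean(np.delete(times, i) > t0)      # leave-one-out estimate
        pv[i] = n * theta_full - (n - 1) * theta_i        # pseudo-value for subject i
    return pv

rng = np.random.default_rng(5)
t = rng.exponential(scale=2.0, size=100)                  # hypothetical event times
pv = pseudo_values(t, t0=1.5)
print("mean pseudo-value ~ estimate of P(T > 1.5):", pv.mean())
# These pseudo-values would then serve as outcome variables in a GEE that
# regresses the state probability on covariates.
```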

16.
In this paper we develop a non-conventional statistical test for the change point in a mean model, by making use of an almost-sure (a.s.) convergence (or strong convergence) result that we obtain for the difference between the sums of squared residuals under the null and alternative hypotheses. We prove that both types of error probabilities of the new test converge to zero almost surely as the sample size goes to infinity. This result does not hold for any conventional statistical test, where the type I error probability, i.e. the significance level or size, is prescribed at a low but non-zero level (e.g. 0.05). The test developed is easy to use in practice and is readily generalised to other change-point models, provided that the relevant almost-sure convergence results are available. We also provide a simulation study to compare the new and conventional tests under different data scenarios; the results obtained are consistent with our asymptotic study. In addition, we provide least squares estimators of the parameters used in the change-point test, together with their almost-sure convergence properties.
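A minimal sketch of the quantity the test is built on: the difference between the residual sum of squares under the no-change null and under the best single change point in a change-in-mean model (the data are simulated; the paper's test statistic and its almost-sure guarantees are not reproduced here):

```python
import numpy as np

def ssr_difference(y):
    """Return SSR(null) - min_k SSR(change at k) for a change-in-mean model,
    together with the minimizing change point."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ssr_null = np.sum((y - y.mean()) ** 2)
    best_ssr, best_k = np.inf, None
    for k in range(1, n):                      # change point after observation k
        left, right = y[:k], y[k:]
        ssr = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if ssr < best_ssr:
            best_ssr, best_k = ssr, k
    return ssr_null - best_ssr, best_k

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0, 1, 150), rng.normal(0.8, 1, 150)])  # mean shift at 150
diff, k_hat = ssr_difference(y)
print(f"SSR difference: {diff:.1f}, estimated change point: {k_hat}")
```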

17.
18.
A novel class of hierarchical nonparametric Bayesian survival regression models for time-to-event data with uninformative right censoring is introduced. The survival curve is modeled as a random function whose prior distribution is defined using the beta-Stacy (BS) process. The prior mean of each survival probability and its prior variance are linked to a standard parametric survival regression model. This nonparametric survival regression can thus be anchored to any reference parametric form, such as a proportional hazards or an accelerated failure time model, allowing substantial departures of the predictive survival probabilities when the reference model is not supported by the data. Also, under this formulation the predictive survival probabilities will be close to the empirical survival distribution near the mode of the reference model, and they will be shrunken towards its probability density in the tails of the empirical distribution.

19.
Scientific progress in all empirical sciences relies on selecting models and performing inferences from the selected models. Standard statistical properties (e.g., the repeated-sampling coverage probability of confidence intervals) cannot be guaranteed after a model selection. This viewpoint reviews this dilemma, puts into perspective the role that pre-specification can play, and illustrates model averaging as a way to relax the problem of model selection uncertainty.

20.
Probability forecasting models can be estimated using weighted score functions that (by definition) capture the performance of the estimated probabilities relative to arbitrary "baseline" probability assessments, such as those produced by another model, by a bookmaker or betting market, or by a human probability assessor. Maximum likelihood estimation (MLE) is interpretable as just one such method of optimum score estimation. We find that when MLE-based probabilities are themselves treated as the baseline, forecasting models estimated by optimizing any of the proven families of power and pseudospherical economic score functions yield the very same probabilities as MLE. The finding that probabilities estimated by optimum score estimation respond to MLE-baseline probabilities by mimicking them supports reliance on MLE as the default form of optimum score estimation.
