Similar Articles
14 similar articles found
1.
ABSTRACT

Dependence among defaults, both across assets and over time, is an important characteristic of financial risk. A Bayesian approach to default rate estimation is proposed and illustrated using prior distributions elicited from an experienced industry expert. Two extensions of the binomial model are proposed. The first allows correlated defaults yet remains consistent with Basel II's asymptotic single-factor model. The second adds temporal correlation in default rates through autocorrelation in the systemic factor. Implications for the predictability of default rates are considered. The single-factor model contributes more forecast uncertainty than parameter uncertainty does. A robustness exercise illustrates that the correlation indicated by the data is much smaller than that specified in the Basel II regulations.
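As a rough illustration of the single-factor machinery described above, the sketch below simulates default counts under the Vasicek conditional default probability with an AR(1) systemic factor. All parameter values (pd, rho, phi) are illustrative, not taken from the paper.

```python
# A minimal sketch of the asymptotic single-factor (Vasicek) default model
# with an AR(1) systemic factor; parameter values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def conditional_pd(pd, rho, z):
    """Default probability conditional on the systemic factor z."""
    return norm.cdf((norm.ppf(pd) + np.sqrt(rho) * z) / np.sqrt(1.0 - rho))

def simulate_defaults(pd=0.02, rho=0.10, phi=0.4, n=1000, T=20):
    """Simulate default counts for T periods of an n-obligor portfolio.

    The systemic factor follows a stationary AR(1), z_t = phi*z_{t-1} + e_t,
    which induces both cross-sectional and temporal default correlation.
    """
    z = 0.0
    counts = np.empty(T, dtype=int)
    for t in range(T):
        z = phi * z + np.sqrt(1.0 - phi**2) * rng.standard_normal()
        counts[t] = rng.binomial(n, conditional_pd(pd, rho, z))
    return counts

print(simulate_defaults())
```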

2.
Statistical models are sometimes incorporated into computer software for making predictions about future observations. When the computer model consists of a single statistical model, this corresponds to estimation of a function of the model parameters. This paper is concerned with the case in which the computer model implements multiple, individually estimated statistical sub-models. This case frequently arises, for example, in models for medical decision making that derive parameter information from multiple clinical studies. We develop a method for calculating the posterior mean of a function of the parameter vectors of multiple statistical models that is easy to implement in computer software, has high asymptotic accuracy, and has a computational cost linear in the total number of model parameters. The formula is then used to derive a general result about posterior estimation across multiple models. The utility of the results is illustrated by application to clinical software that estimates the risk of fatal coronary disease in people with diabetes.
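The paper's exact formula is not reproduced here, but a generic second-order (delta-method style) posterior-mean approximation across independent sub-models conveys the idea: correct the plug-in estimate with one curvature term per sub-model, at a cost linear in the number of parameter blocks. The toy function and all values below are illustrative.

```python
# A generic second-order approximation to E[g(theta)] across independent
# sub-models, in the spirit of the abstract (not the paper's exact formula):
# E[g] ~= g(mean) + 0.5 * sum_k trace(Sigma_k @ H_k), where Sigma_k and H_k
# are the posterior covariance and Hessian block of sub-model k.
import numpy as np

def posterior_mean_of_g(g_at_mean, hessian_blocks, cov_blocks):
    correction = 0.5 * sum(np.trace(S @ H)
                           for S, H in zip(cov_blocks, hessian_blocks))
    return g_at_mean + correction

# Toy example: g(a, b) = a**2 + b with independent posteriors for a and b.
# The approximation is exact here because g is quadratic.
mu_a, mu_b = 1.5, 2.0
g_hat = mu_a**2 + mu_b
H_a = np.array([[2.0]])        # d^2 g / da^2
H_b = np.array([[0.0]])        # d^2 g / db^2
S_a = np.array([[0.04]])       # posterior variance of a
S_b = np.array([[0.09]])       # posterior variance of b
print(posterior_mean_of_g(g_hat, [H_a, H_b], [S_a, S_b]))  # 4.29
```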

3.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments reach patients, when they reach them, and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” owing to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
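A minimal conjugate-normal sketch of the argument: evidence synthesized from earlier trials serves as a prior for a Phase 3 effect estimate, yielding direct probability statements about the magnitude of the effect rather than a binary p < 0.05 verdict. All numbers below are illustrative.

```python
# A minimal conjugate-normal sketch of using earlier-trial evidence as a
# prior for a Phase 3 treatment effect; numbers are illustrative.
from scipy.stats import norm

prior_mean, prior_sd = 0.30, 0.20   # synthesized from earlier trials (assumed)
est, se = 0.25, 0.15                # Phase 3 estimate and standard error

post_prec = 1 / prior_sd**2 + 1 / se**2
post_mean = (prior_mean / prior_sd**2 + est / se**2) / post_prec
post_sd = post_prec ** -0.5

# Probability statements about the magnitude of the treatment effect.
print("P(effect > 0)   =", 1 - norm.cdf(0.0, post_mean, post_sd))
print("P(effect > 0.2) =", 1 - norm.cdf(0.2, post_mean, post_sd))
```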

4.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive character. The method is simple to implement and avoids the usual drawback of Franklin's estimator, namely the occurrence of negative estimates when the population proportion is small. A simulation study is conducted to assess the performance of the proposed estimator as well as the corresponding credible interval.
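Franklin's exact procedure is not reproduced here; the sketch below uses the simpler Warner-type randomized-response likelihood, P(yes) = p·π + (1−p)(1−π), with a grid posterior under a uniform prior. It illustrates why a Bayesian estimate avoids the negative values a moment estimator can produce: the posterior lives on [0, 1] by construction. All values are illustrative.

```python
# A simplified Warner-type randomized-response sketch (Franklin's actual
# procedure differs in detail): each respondent answers truthfully with
# probability p, otherwise answers the complementary question. A grid
# posterior under a Beta(1,1) prior keeps the estimate inside [0, 1].
import numpy as np
from scipy.stats import beta, binom

p = 0.7                      # known design probability
n, yes = 200, 75             # sample size and observed "yes" answers
grid = np.linspace(1e-4, 1 - 1e-4, 2000)

prior = beta.pdf(grid, 1, 1)                              # uniform prior on pi
lik = binom.pmf(yes, n, p * grid + (1 - p) * (1 - grid))  # RR likelihood
post = prior * lik
post /= np.trapz(post, grid)

post_mean = np.trapz(grid * post, grid)
cdf = np.cumsum(post) * (grid[1] - grid[0])
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"posterior mean {post_mean:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```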

5.
Hidden Markov models handle heterogeneous longitudinal data well and are therefore widely used in engineering, biomedicine, economics, and management. This paper introduces a particular non-homogeneous hidden Markov state-transition scheme and combines it with classical multivariate linear regression to propose a hidden non-homogeneous Markov multivariate linear regression model; the principles and technical details of Bayesian inference for this model are described. Finally, two simulation experiments show that the inference method produces reliable results.
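A minimal sketch of the data-generating process such a model assumes: transition probabilities depend on a time-varying covariate through a logistic link, and each hidden regime has its own linear regression. All coefficients below are illustrative.

```python
# A minimal sketch of a hidden non-homogeneous Markov regression DGP:
# covariate-driven transitions plus regime-specific linear emissions.
import numpy as np

rng = np.random.default_rng(1)
T = 200
u = rng.standard_normal(T)                     # covariate driving transitions
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
betas = {0: np.array([0.0, 1.0]), 1: np.array([2.0, -1.0])}
sigmas = {0: 0.5, 1: 1.0}

def stay_prob(state, u_t):
    """Probability of remaining in `state`, as a function of the covariate."""
    g0, g1 = {0: (1.5, 0.8), 1: (1.0, -0.8)}[state]
    return 1.0 / (1.0 + np.exp(-(g0 + g1 * u_t)))

s, states, y = 0, [], []
for t in range(T):
    if rng.random() > stay_prob(s, u[t]):      # non-homogeneous transition
        s = 1 - s
    states.append(s)
    y.append(X[t] @ betas[s] + sigmas[s] * rng.standard_normal())

print("fraction of time in regime 1:", np.mean(states))
```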

6.
A multivariate generalized autoregressive conditional heteroscedasticity model with dynamic conditional correlations is proposed, in which the individual conditional volatilities follow exponential generalized autoregressive conditional heteroscedasticity models and the standardized innovations follow a mixture of Gaussian distributions. Inference on the model parameters and prediction of future volatilities are addressed by both maximum likelihood and Bayesian estimation methods. Estimation of the Value at Risk of a given portfolio and selection of optimal portfolios under the proposed specification are addressed. The good performance of the proposed methodology is illustrated via Monte Carlo experiments and the analysis of the daily closing prices of the Dow Jones and NASDAQ indexes.
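As a sketch of the Value-at-Risk step, given a conditional volatility forecast (e.g., from a fitted EGARCH component), one-day VaR under Gaussian-mixture innovations can be estimated by simulation. The weights, means, and scales below are illustrative.

```python
# A minimal sketch of one-day VaR under a zero-mean Gaussian-mixture
# standardized innovation, given a conditional volatility forecast.
import numpy as np

rng = np.random.default_rng(2)
sigma_next = 0.012                       # conditional volatility forecast
w = [0.9, 0.1]                           # mixture weights
mu = np.array([0.05, -0.45])             # component means (mixture mean ~ 0)
s = np.array([0.8, 2.2])                 # component scales (fat left tail)

comp = rng.choice(2, size=100_000, p=w)
eps = mu[comp] + s[comp] * rng.standard_normal(100_000)
returns = sigma_next * eps

var_99 = -np.quantile(returns, 0.01)     # 99% one-day Value at Risk
print(f"99% one-day VaR: {var_99:.4%} of portfolio value")
```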

7.
Estimation and Properties of a Time-Varying EGARCH(1,1) in Mean Model
Time-varying GARCH-M models are commonly employed in econometrics and financial economics. Yet the recursive nature of the conditional variance makes likelihood analysis of these models computationally infeasible. This article outlines the issues and suggests employing a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying-parameter EGARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.
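A minimal O(T) simulation of an EGARCH(1,1)-M recursion (with constant parameters for clarity; the article's model lets them vary over time) shows why a single pass over the sample suffices. Parameter values are illustrative.

```python
# A minimal O(T) simulation of an EGARCH(1,1)-in-mean recursion:
# r_t = mu + lam*sig2_t + sig_t*eps_t,
# ln sig2_t = omega + beta*ln sig2_{t-1}
#             + alpha*(|eps_{t-1}| - sqrt(2/pi)) + gamma*eps_{t-1}.
import numpy as np

rng = np.random.default_rng(3)
T = 1000
mu, lam = 0.0, 0.05                         # mean-equation parameters
omega, alpha, gamma, beta = -0.1, 0.15, -0.08, 0.97

log_sig2 = omega / (1 - beta)               # unconditional starting level
eps_prev = 0.0
r = np.empty(T)
for t in range(T):                          # one pass: O(T) operations
    log_sig2 = (omega + beta * log_sig2
                + alpha * (abs(eps_prev) - np.sqrt(2 / np.pi))
                + gamma * eps_prev)         # sign (leverage) effect
    sig = np.exp(0.5 * log_sig2)
    eps_prev = rng.standard_normal()
    r[t] = mu + lam * sig**2 + sig * eps_prev

print("sample sd of simulated returns:", r.std())
```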

8.
The purpose of this paper is to develop a Bayesian analysis for the zero-inflated hyper-Poisson model. Markov chain Monte Carlo methods are used to develop a Bayesian procedure for the model, and the Bayes estimators are compared by simulation with the maximum-likelihood estimators. Regression modeling and model selection are also discussed, and case-deletion influence diagnostics are developed for the joint posterior distribution based on the functional Bregman divergence, which includes the ψ-divergence and several other divergence measures, such as the Itakura–Saito, Kullback–Leibler, and χ² divergence measures. The performance of our approach is illustrated with artificial data and with real data from an apple cultivation experiment.
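For concreteness, the hyper-Poisson pmf can be written as P(X = x) = θ^x / ((γ)_x · ₁F₁(1; γ; θ)), which reduces to the Poisson when γ = 1; the zero-inflated version adds extra mass at zero. A sketch with illustrative parameters:

```python
# A sketch of the zero-inflated hyper-Poisson pmf, using the confluent
# hypergeometric function 1F1 and the Pochhammer symbol from scipy.
import numpy as np
from scipy.special import poch, hyp1f1

def hyper_poisson_pmf(x, theta, gamma):
    """Hyper-Poisson pmf; reduces to Poisson(theta) when gamma = 1."""
    return theta**x / (poch(gamma, x) * hyp1f1(1.0, gamma, theta))

def zi_hyper_poisson_pmf(x, pi, theta, gamma):
    """Zero-inflated version: extra probability mass pi at zero."""
    base = hyper_poisson_pmf(x, theta, gamma)
    return np.where(x == 0, pi + (1 - pi) * base, (1 - pi) * base)

x = np.arange(10)
pmf = zi_hyper_poisson_pmf(x, pi=0.2, theta=2.0, gamma=1.5)
print(pmf, pmf.sum())   # approaches 1 as more of the tail is included
```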

9.
To demonstrate the treatment effect on structural damage in rheumatoid arthritis (RA) and psoriatic arthritis (PsA), radiographic images of hands and feet are scored according to Sharp scoring systems in randomized clinical trials. However, quantifying such an effect is challenging because the overall mean progression lacks clinical interpretation. This article attempts to shed light on the statistical challenges resulting from the scoring methods and the heterogeneity of the study population, and proposes a mixture distribution model approach to fit radiographic progression data. With such a model, the drug effect is fully captured by the mean progression of those patients who would progress during the study period under the control treatment. The resulting regression model also provides a tool for examining prognostic factors for radiographic progression. Simulations have been carried out to evaluate the precision of the parameter estimation procedure. Using data examples from RA and PsA, we show that the mixture distribution approach provides a better goodness of fit and leads to causal inference about the study drug, and hence a clinically meaningful interpretation.
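A minimal sketch of a two-component mixture likelihood of the kind described: non-progressors contribute measurement noise around zero, progressors a shifted component, and the progressors' mean is the clinically interpretable quantity. The normal components and all values below are illustrative, not the paper's exact specification.

```python
# A minimal two-component mixture fit for progression scores:
# (1 - pi) * noise-around-zero + pi * progressor component.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 0.3, 140),   # simulated non-progressors
                    rng.normal(3.0, 1.5, 60)])   # simulated progressors

def neg_loglik(params):
    logit_pi, mu, log_s0, log_s1 = params
    pi = 1 / (1 + np.exp(-logit_pi))             # progressor fraction
    f0 = norm.pdf(y, 0.0, np.exp(log_s0))        # non-progressor component
    f1 = norm.pdf(y, mu, np.exp(log_s1))         # progressor component
    return -np.sum(np.log((1 - pi) * f0 + pi * f1))

fit = minimize(neg_loglik, x0=[0.0, 2.0, -1.0, 0.0], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
print(f"progressor fraction {pi_hat:.2f}, mean progression {fit.x[1]:.2f}")
```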

10.
A family of threshold nonlinear generalised autoregressive conditionally heteroscedastic models is considered that allows smooth transitions between regimes, capturing size asymmetry via an exponential smooth transition function. A Bayesian approach is taken, and an efficient adaptive sampling scheme is employed for inference, including a novel extension to a recently proposed prior for the smoothing parameter that solves a likelihood identification problem. A simulation study illustrates that the sampling scheme performs well, with the chosen prior kept close to uninformative, while successfully ensuring identification of the model parameters and accurate inference for the smoothing parameter. An empirical study confirms the potential suitability of the model, highlighting the presence of both mean and volatility (size) asymmetry, and the model is favoured over modern, popular competitors, including those with sign asymmetry, via the deviance information criterion.
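A minimal sketch of a variance recursion with an exponential size-based transition, G(a) = 1 − exp(−γa²), so the ARCH coefficient moves smoothly between a small-shock and a large-shock regime. Parameter values are illustrative.

```python
# A minimal smooth-transition GARCH variance recursion: the ARCH
# coefficient shifts from a_lo to a_hi as the previous shock grows,
# via G(a) = 1 - exp(-g * a**2), capturing size asymmetry.
import numpy as np

rng = np.random.default_rng(5)
T = 1000
omega, a_lo, a_hi, beta_, g = 0.02, 0.02, 0.08, 0.90, 4.0

h = omega / (1 - 0.5 * (a_lo + a_hi) - beta_)   # rough starting level
a_prev = 0.0
returns = np.empty(T)
for t in range(T):
    G = 1.0 - np.exp(-g * a_prev**2)            # smooth size transition
    h = omega + (a_lo * (1 - G) + a_hi * G) * a_prev**2 + beta_ * h
    a_prev = np.sqrt(h) * rng.standard_normal()
    returns[t] = a_prev

kurt = ((returns - returns.mean())**4).mean() / returns.var()**2
print("sample kurtosis:", kurt)                 # > 3, i.e. fat tails
```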

11.
Variational Bayes (VB) estimation is a fast alternative to Markov chain Monte Carlo for performing approximate Bayesian inference. This procedure can be an efficient and effective means of analyzing large datasets. However, VB estimation is often criticised, typically on empirical grounds, for being unable to produce valid statistical inferences. In this article we refute this criticism for one of the simplest models where Bayesian inference is not analytically tractable, namely the Bayesian linear model (for a particular choice of priors). We prove that, under mild regularity conditions, VB-based estimators enjoy desirable frequentist properties such as consistency and can be used to obtain asymptotically valid standard errors. In addition to these results we introduce two VB information criteria: the variational Akaike information criterion and the variational Bayesian information criterion. We show that the variational Akaike information criterion is asymptotically equivalent to the frequentist Akaike information criterion and that the variational Bayesian information criterion is first-order equivalent to the Bayesian information criterion in linear regression. These results motivate the potential use of the variational information criteria for more complex models. We support our theoretical results with numerical examples.
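For the Bayesian linear model, the standard mean-field coordinate-ascent VB (CAVI) updates fit in a few lines. The prior choices below (Gaussian on the coefficients, inverse-gamma on the noise variance) are one common setup and may differ from the article's particular priors.

```python
# A minimal CAVI sketch for the Bayesian linear model: alternate between
# a Gaussian factor for the coefficients and an inverse-gamma factor for
# the noise variance until the updates stabilize.
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.7, n)

tau2 = 100.0                      # prior variance of the coefficients
a0, b0 = 1e-2, 1e-2               # inverse-gamma hyperparameters
E_inv_s2 = 1.0                    # initial E[1/sigma^2]
XtX, Xty = X.T @ X, X.T @ y

for _ in range(50):               # CAVI updates
    Sigma = np.linalg.inv(E_inv_s2 * XtX + np.eye(p) / tau2)
    mu = Sigma @ (E_inv_s2 * Xty)
    a = a0 + n / 2
    b = b0 + 0.5 * (np.sum((y - X @ mu) ** 2) + np.trace(XtX @ Sigma))
    E_inv_s2 = a / b

print("VB posterior mean:", mu)
print("VB standard errors:", np.sqrt(np.diag(Sigma)))
```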

12.
This article develops a framework for estimating multivariate treatment effect models in the presence of sample selection. The methodology deals with several important issues prevalent in policy and program evaluation, including application and approval stages, nonrandom treatment assignment, endogeneity, and discrete outcomes. This article presents a computationally efficient estimation algorithm and techniques for model comparison and for computing treatment effects. The framework is applied to evaluate the effectiveness of bank recapitalization programs and their ability to resuscitate the financial system. The analysis of lender of last resort (LOLR) policies is complicated not only by econometric challenges but also because regulator data are not easily obtainable. Motivated by these difficulties, this article constructs a novel bank-level dataset and employs the new methodology to jointly model a bank's decision to apply for assistance, the LOLR's decision to approve or decline the assistance, and the bank's performance following the disbursements. The article offers practical estimation tools to unveil new answers to important regulatory and policy questions.
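A minimal simulation of the three-stage selection structure described above (application, approval, outcome, with correlated latent errors) shows why naive comparisons are biased under nonrandom treatment assignment. All coefficients below are illustrative.

```python
# A minimal sketch of a three-stage selection DGP: apply -> approve ->
# outcome, with correlated latent errors creating endogenous selection.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.standard_normal(n)                      # bank characteristics
cov = np.array([[1.0, 0.5, 0.4],                # correlated errors =>
                [0.5, 1.0, 0.3],                # nonrandom assignment
                [0.4, 0.3, 1.0]])
e = rng.multivariate_normal(np.zeros(3), cov, size=n)

apply_ = 0.2 + 0.8 * x + e[:, 0] > 0                     # application stage
approve = apply_ & (-0.1 + 0.5 * x + e[:, 1] > 0)        # approval stage
outcome = 1.0 + 0.6 * approve + 0.7 * x + e[:, 2]        # performance

# A naive comparison is biased because approved banks differ systematically.
naive = outcome[approve].mean() - outcome[~approve].mean()
print(f"naive 'effect' {naive:.2f} vs true treatment effect 0.60")
```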

13.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.
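A minimal simulation sketch of a prior predictive p-value: draw parameters from the prior, replicate the test statistic under the model, and compare with its observed value. The model and prior below are illustrative.

```python
# A minimal prior predictive p-value: small p signals prior-model conflict.
import numpy as np

rng = np.random.default_rng(8)
y_obs = rng.normal(2.5, 1.0, size=30)          # stand-in for observed data
t_obs = y_obs.mean()

n_sim = 20_000
mu = rng.normal(0.0, 1.0, n_sim)               # prior: mu ~ N(0, 1)
t_rep = rng.normal(mu, 1.0 / np.sqrt(30))      # sampling dist. of the mean
p = np.mean(np.abs(t_rep) >= np.abs(t_obs))    # two-sided prior predictive p

print(f"prior predictive p-value: {p:.4f}")
```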

14.
The evaluation of hazards from complex, large-scale, technologically advanced systems often requires the construction of computer-implemented mathematical models. These models are used to evaluate the safety of the systems and the consequences of modifications to them. Such evaluations, however, are normally surrounded by significant uncertainties: those inherent in natural phenomena such as the weather, and those in the parameters and models used in the evaluation.

Another use of these models is to evaluate strategies for improving the information used in the modeling process itself. While sensitivity analysis is useful for identifying which variables in the model are important, uncertainty analysis provides a tool for assessing the importance of uncertainty about these variables. A third, complementary technique is decision analysis, which provides a methodology for explicitly evaluating and ranking potential improvements to the model. Its use in the development of information-gathering strategies for a nuclear waste repository is discussed in this paper.
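One standard decision-analytic quantity for ranking potential information improvements is the expected value of perfect information (EVPI): the gap between deciding after learning the uncertain input and deciding before. The utilities and distribution below are illustrative.

```python
# A minimal EVPI sketch: compare the best decision made under current
# uncertainty with the (average) best decision made with the input known.
import numpy as np

rng = np.random.default_rng(9)
theta = rng.normal(0.0, 1.0, 100_000)          # uncertain model input

def utility(action, th):
    """Payoff of each action as a function of the uncertain input."""
    return {"act": 2.0 * th - 0.5, "wait": np.zeros_like(th)}[action]

actions = ["act", "wait"]
best_now = max(np.mean(utility(a, theta)) for a in actions)   # decide, then learn
best_informed = np.mean(np.maximum(utility("act", theta),
                                   utility("wait", theta)))   # learn, then decide
print(f"EVPI = {best_informed - best_now:.3f}")
```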
