Similar Articles (20 results)
1.
Experience ratemaking plays a crucial role in general insurance: future premiums for the individuals in a portfolio are determined by assessing the claims observed across the whole portfolio. This paper investigates the problem when claims can be modeled by a certain parametric family of distributions. Dirichlet process mixtures are employed to model the distributions of the parameters, which offers two advantages: it produces exact Bayesian experience premiums for a class of premium principles generated from generic error functions, and at the same time it provides a robust and flexible way to avoid the bias that traditionally used priors, such as noninformative or conjugate priors, may introduce. Because the conditional expectations of the quantities concerned lack analytical forms, Gibbs sampling schemes are designed to approximate them.
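The article's Dirichlet-process machinery does not fit in a short sketch, but the simplest conjugate instance of Bayesian experience ratemaking, a Poisson-gamma model for claim counts, shows how the posterior-mean premium blends individual experience with the collective prior. Everything below (function name, prior values, data) is an illustrative assumption, not taken from the article:

```python
import numpy as np

def bayes_premium_poisson_gamma(claims, alpha, beta):
    """Posterior-mean (net) premium for Poisson claim counts under a
    Gamma(alpha, beta) prior on the Poisson rate (rate parametrization).
    The posterior is Gamma(alpha + sum(claims), beta + n)."""
    n = len(claims)
    return (alpha + sum(claims)) / (beta + n)

# The same premium in credibility form:
#   premium = z * individual mean + (1 - z) * prior (collective) mean
claims = [2, 0, 1, 3, 1]            # illustrative claim counts
alpha, beta = 2.0, 1.0              # illustrative prior
z = len(claims) / (beta + len(claims))
cred = z * np.mean(claims) + (1 - z) * (alpha / beta)
premium = bayes_premium_poisson_gamma(claims, alpha, beta)
```

The DP-mixture approach of the paper replaces this single conjugate prior with a nonparametric mixture, precisely to avoid committing to one such prior family.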

2.
In this article, we study the problem of selecting the best population from among several exponential populations based on interval-censored samples, using a Bayesian approach. A Bayes selection procedure and a curtailed Bayes selection procedure are derived, and we show that the two procedures are equivalent. A numerical example illustrates their application, and a Monte Carlo simulation study examines their performance. The simulation results demonstrate that the curtailed Bayes selection procedure performs well because it can substantially reduce the duration of the life-test experiment.

3.
ABSTRACT

We consider multiple regression (MR) model averaging using the focused information criterion (FIC). Our approach is motivated by the problem of implementing a mean-variance portfolio choice rule. The usual approach is to estimate parameters ignoring the intention to use them in portfolio choice. We develop an estimation method that focuses on the trading rule of interest. Asymptotic distributions of submodel estimators in the MR case are derived using a localization framework. The localization is of both regression coefficients and error covariances. Distributions of submodel estimators are used for model selection with the FIC. This allows comparison of submodels using the risk of portfolio rule estimators. FIC model averaging estimators are then characterized. This extension further improves risk properties. We show in simulations that applying these methods in the portfolio choice case results in improved estimates compared with several competitors. An application to futures data shows superior performance as well.

4.
To improve the out-of-sample performance of the portfolio, Lasso regularization is incorporated into the Mean Absolute Deviation (MAD)-based portfolio selection method. It is shown that the resulting portfolio selection problem can be reformulated as a constrained Least Absolute Deviation problem with linear equality constraints. Moreover, we propose a new descent algorithm based on the ideas of ‘nonsmooth optimality conditions’ and a ‘basis descent direction set’. The resulting MAD-Lasso method enjoys at least two advantages. First, it does not involve estimating the covariance matrix, which is particularly difficult in high-dimensional settings. Second, sparsity is encouraged: assets whose weights in the Markowitz portfolio are close to zero are driven to exactly zero automatically, which reduces the management cost of the portfolio. Extensive simulation and real-data examples indicate that incorporating Lasso regularization consistently improves the MAD portfolio selection method in terms of out-of-sample performance, as measured by the Sharpe ratio and sparsity. Moreover, simulation results suggest that the proposed descent algorithm is more time-efficient than the interior point method and the ADMM algorithm.
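The reformulation idea can be sketched with a generic LP solver rather than the authors' specialized descent algorithm: the long-only minimum-MAD portfolio becomes a linear program once one auxiliary deviation variable per period is introduced (long-only bounds and the solver choice are assumptions for the sketch; the article's Lasso penalty is omitted):

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(R):
    """Long-only minimum-MAD portfolio via linear programming.
    R: (T, n) matrix of asset returns. Returns the weight vector w."""
    T, n = R.shape
    D = R - R.mean(axis=0)                      # returns centered at sample means
    # Variables: [w_1..w_n, u_1..u_T]; objective: minimize (1/T) * sum(u)
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    # u_t >= |D_t @ w|  <=>  D_t @ w - u_t <= 0  and  -D_t @ w - u_t <= 0
    A_ub = np.vstack([np.hstack([D, -np.eye(T)]),
                      np.hstack([-D, -np.eye(T)])])
    b_ub = np.zeros(2 * T)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, T))])   # weights sum to 1
    bounds = [(0, 1)] * n + [(0, None)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n]
```

Adding the Lasso term keeps the problem a linear program, since an L1 penalty on the weights linearizes the same way as the deviations do.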

5.
Large pharmaceutical companies maintain a portfolio of assets, some of which are projects under development while others are on the market and generating revenue. The budget allocated to R&D may not always be sufficient to fund all the available projects for development. Much attention has been paid to the selection of optimal subsets of available projects to fit within the available budget. In this paper, we argue the need for a forward-looking approach to portfolio decision-making. We develop a quantitative model that allows the portfolio management to evaluate the need for future inflow of new projects to achieve revenue at desired levels, often aspiring to a certain annual revenue growth. Optimisation methods are developed for the presented model, allowing an optimal choice of number, timing and type of projects to be added to the portfolio. The proposed methodology allows for a proactive approach to portfolio management, prioritisation, and optimisation. It provides a quantitatively based support for strategic decisions regarding the efforts needed to secure the future development pipeline and revenue stream of the company.

6.
The mean-VaR model is a relatively complex nonlinear programming problem, and traditional algorithms cannot guarantee a globally optimal solution. We therefore introduce a genetic algorithm to solve for the asset allocation weights. For the single-objective optimization problem based on mean-VaR, a genetic algorithm with a restricted search space and a penalty function is designed; for the multi-objective optimization problem, a parallel-selection genetic algorithm is applied. Portfolios are constructed from the CSI 300 industry classification indices to analyze the industry-level asset allocation problem. The results show that the algorithms perform well: the solutions satisfy the investment objectives and constraints, reflect investors' differing risk-return preferences, and are practical to implement.
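A minimal sketch of the penalty-function idea for the single-objective case follows; the truncation selection, blend crossover, historical-VaR fitness, and all parameter values here are generic illustrative choices, not the article's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, mu, returns, var_limit, penalty=100.0):
    """Expected return minus a penalty whenever the historical 95% VaR
    of the portfolio exceeds var_limit (penalty-function constraint handling)."""
    port = returns @ w
    var95 = -np.quantile(port, 0.05)          # historical VaR at the 95% level
    return mu @ w - penalty * max(0.0, var95 - var_limit)

def genetic_allocate(mu, returns, var_limit, pop=60, gens=200):
    """Evolve portfolio weights on the simplex toward high penalized fitness."""
    n = len(mu)
    P = rng.dirichlet(np.ones(n), size=pop)   # feasible initial weights
    for _ in range(gens):
        f = np.array([fitness(w, mu, returns, var_limit) for w in P])
        parents = P[np.argsort(f)[-pop // 2:]]          # keep the top half
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = 0.5 * (a + b) + rng.normal(0, 0.02, n)  # crossover + mutation
            child = np.clip(child, 0, None)
            kids.append(child / child.sum())            # project back to simplex
        P = np.vstack([parents, kids])
    f = np.array([fitness(w, mu, returns, var_limit) for w in P])
    return P[np.argmax(f)]
```

Because the top half of each generation survives unchanged, the best fitness found never decreases across generations.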

7.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimating the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, perhaps because the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation, allowing us to compute the Bayes factor comparing the null and alternative hypotheses. This default model selection procedure is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy in the sense that the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factors under intrinsic or fractional priors do not.

8.
Finite mixture of regression (FMR) models are aimed at characterizing subpopulation heterogeneity stemming from different sets of covariates that impact different groups in a population. We address the contemporary problem of simultaneously conducting covariate selection and determining the number of mixture components from a Bayesian perspective that can incorporate prior information. We propose a Gibbs sampling algorithm with reversible jump Markov chain Monte Carlo implementation to accomplish concurrent covariate selection and mixture component determination in FMR models. Our Bayesian approach contains innovative features compared to previously developed reversible jump algorithms. In addition, we introduce component-adaptive weighted g priors for regression coefficients, and illustrate their improved performance in covariate selection. Numerical studies show that the Gibbs sampler with reversible jump implementation performs well, and that the proposed weighted priors can be superior to non-adaptive unweighted priors.

9.
Abstract

In the model selection problem, the consistency of the selection criterion has often been discussed. This paper derives a family of criteria from a family of robust statistical divergences using a generalized Bayesian procedure. The proposed family can achieve consistency and robustness at the same time, since under appropriate circumstances it performs well in the presence of contamination by outliers. Numerical experiments compare the selection accuracy of the proposed criterion family with that of conventional methods.

10.
In this article, we develop a Bayesian analysis of an autoregressive model with explanatory variables. When σ² is known, we consider a normal prior and give the Bayesian estimator for the regression coefficients of the model. When σ² is unknown, another Bayesian estimator is given for all unknown parameters under a conjugate prior. The Bayesian model selection problem is also considered under double-exponential priors. Using the convergence of a ρ-mixing sequence, the consistency and asymptotic normality of the Bayesian estimators of the regression coefficients are proved. Simulation results indicate that our Bayesian estimators do not depend strongly on the priors and are robust.
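For the known-variance case, the conjugate normal update has a closed form; in an autoregressive model with explanatory variables, lagged responses simply enter the design matrix as extra columns. The following is a generic sketch of that conjugate update, not the article's exact setup:

```python
import numpy as np

def bayes_regression_posterior(X, y, sigma2, mu0, Sigma0):
    """Posterior mean and covariance of regression coefficients under a
    N(mu0, Sigma0) prior with known noise variance sigma2 (conjugate case):
        posterior precision = Sigma0^{-1} + X'X / sigma2."""
    prior_prec = np.linalg.inv(Sigma0)
    prec = prior_prec + X.T @ X / sigma2
    cov = np.linalg.inv(prec)
    mean = cov @ (prior_prec @ mu0 + X.T @ y / sigma2)
    return mean, cov
```

As the prior covariance grows, the posterior mean approaches the least-squares estimate, which is one way to see the claimed insensitivity to the prior in large samples.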

11.
In this article we present a technique for implementing large-scale optimal portfolio selection. We use high-frequency daily data to capture valuable statistical information in asset returns. We describe several statistical issues involved in quantitative approaches to portfolio selection. Our methodology applies to large-scale portfolio-selection problems in which the number of possible holdings is large relative to the estimation period provided by historical data. We illustrate our approach on an equity database that consists of stocks from the Standard and Poor's index, and we compare our portfolios to this benchmark index. Our methodology differs from the usual quadratic programming approach to portfolio selection in three ways: (1) We employ informative priors on the expected returns and variance-covariance matrices, (2) we use daily data for estimation purposes, with upper and lower holding limits for individual securities, and (3) we use a dynamic asset-allocation approach that is based on reestimating and then rebalancing the portfolio weights on a prespecified time window. The key inputs to the optimization process are the predictive distributions of expected returns and the predictive variance-covariance matrix. We describe the statistical issues involved in modeling these inputs for high-dimensional portfolio problems in which our data frequency is daily. In our application, we find that our optimal portfolio outperforms the underlying benchmark.
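The rebalancing step can be sketched as follows, taking predictive moments as inputs and using a crude clip-and-renormalize in place of a proper constrained quadratic program; the risk-aversion parameter `gamma` and the holding limits are illustrative assumptions:

```python
import numpy as np

def mv_weights(mu_pred, Sigma_pred, gamma=5.0, lo=0.0, hi=0.3):
    """Mean-variance weights from predictive moments, clipped to per-asset
    holding limits and renormalized (a heuristic stand-in for a QP)."""
    w = np.linalg.solve(Sigma_pred, mu_pred) / gamma   # unconstrained solution
    w = np.clip(w, lo, hi)
    return w / w.sum()

def rebalance(returns, window=250, **kw):
    """Re-estimate predictive moments on a rolling window and rebalance,
    mirroring the reestimate-then-rebalance scheme described above."""
    weights = []
    for t in range(window, len(returns)):
        R = returns[t - window:t]
        weights.append(mv_weights(R.mean(axis=0), np.cov(R.T), **kw))
    return np.array(weights)
```

In the article the moments come from Bayesian predictive distributions under informative priors rather than from raw sample moments as in this sketch.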

12.
A multivariate generalized autoregressive conditional heteroscedasticity model with dynamic conditional correlations is proposed, in which the individual conditional volatilities follow exponential generalized autoregressive conditional heteroscedasticity models and the standardized innovations follow a mixture of Gaussian distributions. Inference on the model parameters and prediction of future volatilities are addressed by both maximum likelihood and Bayesian estimation methods. Estimation of the Value at Risk of a given portfolio and selection of optimal portfolios under the proposed specification are addressed. The good performance of the proposed methodology is illustrated via Monte Carlo experiments and the analysis of the daily closing prices of the Dow Jones and NASDAQ indexes.
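Once predictive return scenarios have been simulated from a fitted model of this kind, the portfolio Value at Risk is just a loss quantile. A minimal sketch, with plain Gaussian draws standing in for the model's actual predictive simulations:

```python
import numpy as np

def portfolio_var(sim_returns, weights, level=0.99):
    """Value at Risk of a portfolio from simulated predictive return
    scenarios: the loss quantile at the given confidence level.
    sim_returns: (n_sims, n_assets) array of scenario returns."""
    losses = -(sim_returns @ weights)
    return np.quantile(losses, level)
```

The mixture-of-Gaussians innovations in the proposed model would fatten the tails of `sim_returns` and hence raise this quantile relative to a single Gaussian.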

13.
Abstract

Covariance estimation and selection for multivariate datasets in a high-dimensional regime is a fundamental problem in modern statistics. Gaussian graphical models are a popular class of models used for this purpose. Current Bayesian methods for inverse covariance matrix estimation under Gaussian graphical models require the underlying graph, and hence the ordering of the variables, to be known. In practice, however, such information on the true underlying model is often unavailable. We therefore propose a novel permutation-based Bayesian approach to tackle the unknown variable-ordering issue. In particular, we utilize multiple maximum a posteriori estimates under the DAG-Wishart prior, one for each permutation, and subsequently construct the final estimate of the inverse covariance matrix. The proposed estimator has smaller variability and is order-invariant. We establish posterior convergence rates under mild assumptions and illustrate via simulation studies that our method outperforms existing approaches in estimating inverse covariance matrices.

14.
In a recent article by Qi, neural networks trained by Bayesian regularization were used to predict excess returns on the S&P 500. The article concluded that the switching portfolio based on the recursive neural-network forecasts generates higher accumulated wealth with lower risks than that based on linear regression. Unfortunately, attempts to replicate the results were unsuccessful. Replicated results using the same software, approach and data detailed by Qi indicate that, in fact, the switching portfolio based on the recursive neural-network forecasts generates lower accumulated wealth with higher risks than that based on linear regression.

15.
In the Bayesian approach to parametric model comparison, the use of improper priors is problematic due to the indeterminacy of the resulting Bayes factor (BF). The need for developing automatic and robust methods for model comparison has led to the introduction of alternative BFs. Intrinsic Bayes factors (Berger and Pericchi, 1996a) and fractional Bayes factors (FBF) (O'Hagan, 1995) are two alternative strategies for default model selection. We show in this paper that the FBF can be inconsistent. To overcome this problem, we propose a generalization of the FBF approach that leads to the usual FBF or to some variants of it in some special cases. As an important problem, we consider and discuss this generalization for model selection in nested linear models.

16.
A hierarchical Bayesian approach to the problem of comparison of two means is considered. Hypothesis testing, ranking and selection, and estimation (after selection) are treated. Under the hypothesis that two means are different, it is desired to select the population which has the larger mean. Expressions for the ranking probability of each mean being the larger and the corresponding estimate of each mean are given. For certain priors, it is possible to express the quantities of interest in closed form. A simulation study has been done to compare mean square errors of a hierarchical Bayesian estimator and some of the existing estimators of the selected mean. The case of comparing two means in the presence of block effects has also been considered and an example is presented to illustrate the methodology.
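For normal data with known variance and independent normal priors, the ranking probability that one mean exceeds the other has a closed form. The prior and variance settings below are illustrative assumptions, not those of the article:

```python
import numpy as np
from scipy.stats import norm

def prob_larger(x, y, tau2=100.0, sigma2=1.0):
    """P(mu_x > mu_y | data) under independent N(0, tau2) priors on the two
    means and a known common observation variance sigma2 (conjugate case).
    The difference of the two normal posteriors is again normal."""
    def posterior(z):
        n = len(z)
        v = 1.0 / (1.0 / tau2 + n / sigma2)   # posterior variance
        m = v * np.sum(z) / sigma2            # posterior mean
        return m, v
    mx, vx = posterior(np.asarray(x))
    my, vy = posterior(np.asarray(y))
    return norm.cdf((mx - my) / np.sqrt(vx + vy))
```

Selecting the population with `prob_larger` above one half is the Bayes rule under symmetric loss in this simple setting.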

17.
The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity. An interesting feature of this approach is that it also leads to an automatic choice of model complexity. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this. If the variational algorithm is initialized with a large number of hidden states, redundant states are eliminated as the method converges to a solution, thereby leading to a selection of the number of hidden states. In addition, through the use of a variational approximation, the deviance information criterion for Bayesian model selection can be extended to the hidden Markov model framework. Calculation of the deviance information criterion provides a further tool for model selection, which can be used in conjunction with the variational approach.

18.
The two-sample problem of inferring whether two random samples have equal underlying distributions is formulated within the Bayesian framework as a comparison of two posterior predictive inferences rather than as a problem of model selection. The suggested approach is argued to be particularly advantageous in problems where the objective is to evaluate evidence in support of equality, along with being robust to the priors used and being capable of handling improper priors. Our approach is contrasted with the Bayes factor in a normal setting and finally, an additional example is considered where the observed samples are realizations of Markov chains.

19.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest will focus on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement is challenging, owing to the resulting “random high” / selection bias. In this paper, we will investigate Bayesian model averaging to address this problem. The general motivation for the use of model averaging is to realize that subgroup selection can be viewed as model selection, so that methods to deal with model selection uncertainty, such as model averaging, can be used also in this setting. Simulations are used to evaluate the performance of the proposed approach. We illustrate it on an example early-phase clinical trial.
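The core averaging step, weighting each candidate subgroup model's effect estimate by its posterior model probability, can be sketched as follows; equal prior model probabilities are assumed, and the marginal-likelihood values would come from the fitted models (their computation is model-specific and omitted here):

```python
import numpy as np

def model_averaged_effect(estimates, log_marglik):
    """Model-averaged treatment effect: weight each candidate model's
    estimate by its posterior model probability, computed from log
    marginal likelihoods with equal prior model probabilities."""
    w = np.exp(log_marglik - np.max(log_marglik))  # stabilized exponentiation
    w /= w.sum()
    return float(np.dot(w, estimates)), w
```

Because no single subgroup model is selected outright, the averaged estimate is pulled toward models the data do not strongly distinguish, which is exactly how the approach tempers the "random high" bias.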

20.
Summary.  Existing Bayesian model selection procedures require the specification of prior distributions on the parameters appearing in every model in the selection set. In practice, this requirement limits the application of Bayesian model selection methodology. To overcome this limitation, we propose a new approach towards Bayesian model selection that uses classical test statistics to compute Bayes factors between possible models. In several test cases, our approach produces results that are similar to previously proposed Bayesian model selection and model averaging techniques in which prior distributions were carefully chosen. In addition to eliminating the requirement to specify complicated prior distributions, this method offers important computational and algorithmic advantages over existing simulation-based methods. Because it is easy to evaluate the operating characteristics of this procedure for a given sample size and specified number of covariates, our method facilitates the selection of hyperparameter values through prior-predictive simulation.
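One generic way to turn a classical test statistic into an approximate Bayes factor is the rough Schwarz/BIC device shown below; this is a standard approximation, not the article's specific construction:

```python
import numpy as np

def approx_log_bf(lr_stat, extra_params, n):
    """Approximate log Bayes factor (alternative vs. null) for nested models
    from a likelihood-ratio statistic, via the Schwarz/BIC approximation:
        log BF10 ~ (LR - k * log n) / 2,
    where k is the number of extra parameters under the alternative."""
    return 0.5 * (lr_stat - extra_params * np.log(n))
```

The approximation makes explicit why a fixed test statistic gives weaker evidence against the null as the sample size grows.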
