Similar Articles
19 similar articles found
1.
We review recent research on time-varying risk premiums, including attempts to explain rejections by Baillie and others of the unbiasedness hypothesis. Using spot and forward foreign exchange rates, we discuss the evidence for time-varying risk premiums, relate it to general equilibrium theories of asset pricing, and describe the artificial-economy methodology.
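As a rough illustration of the regression behind the unbiasedness hypothesis, the sketch below fits the standard forward-premium regression on simulated data; the data-generating values are hypothetical, and the specific test is only one common formulation, not necessarily the one used in the papers surveyed.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical log spot and one-period forward exchange rates (illustrative only).
rng = np.random.default_rng(0)
T = 200
s = np.cumsum(rng.normal(0.0, 0.01, T + 1))    # log spot rate s_t, t = 0..T
f = s[:-1] + rng.normal(0.001, 0.005, T)       # log forward rate f_t for date t+1

# Unbiasedness regression: s_{t+1} - s_t = alpha + beta * (f_t - s_t) + eps_t.
# Under unbiasedness (no time-varying risk premium), alpha = 0 and beta = 1.
y = s[1:] - s[:-1]                 # realized depreciation
X = sm.add_constant(f - s[:-1])    # forward premium
res = sm.OLS(y, X).fit()
print(res.params)                  # compare the slope to 1
print(res.t_test("x1 = 1"))        # test H0: beta = 1
```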

2.
The present paper is concerned with some results in cohort studies in which the individuals in two study populations are exposed simultaneously to several risks of death, which compete for their lives.

The mortality experience of individuals in the two study populations is compared with the mortality experience of individuals in a well-defined, fixed population called the standard population.

Under some reasonable assumptions, not only are simple variance formulas developed for the standardized risk ratio statistics $\widehat{SRR}_i$, but their joint asymptotic sampling distribution is also derived. It is demonstrated that these $\widehat{SRR}_i$ have asymptotically a multivariate normal distribution for any given number of competing risks of death. These results are utilized to construct Scheffé-type and Šidák-type simultaneous confidence intervals for the $SRR_i$ parameters, which hold regardless of the covariance structure among the competing risks of death. The corresponding results for the cause-specific SMR and the externally standardized risk ratio parameters follow as special cases.

The present paper generalizes the available results in the literature in two directions: it obtains simple variance formulas for the $\widehat{SRR}_i$ statistics, and it treats the situation in which individuals in a study are simultaneously exposed to competing risks of death.

An empirical evaluation of these results is discussed in the last section, utilizing real cohort data from two recent occupational epidemiologic cohort studies.
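As a rough illustration of the kind of quantity involved, the sketch below computes cause-specific SMR-type ratios with Šidák-type simultaneous confidence intervals on the log scale. The counts, the Poisson variance approximation var(log SMR_i) ≈ 1/O_i, and the use of the SMR in place of the paper's $\widehat{SRR}_i$ statistics are all assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np
from scipy import stats

# Hypothetical observed and expected deaths for k = 3 competing causes.
observed = np.array([30.0, 12.0, 45.0])   # O_i in the study cohort
expected = np.array([22.4, 15.1, 38.0])   # E_i from the standard population

smr = observed / expected
k, alpha = len(smr), 0.05
level = 1 - (1 - alpha) ** (1 / k)        # Sidak-adjusted per-interval level
z = stats.norm.ppf(1 - level / 2)
se_log = 1 / np.sqrt(observed)            # Poisson approx: var(log SMR_i) ~ 1/O_i
lower, upper = smr * np.exp(-z * se_log), smr * np.exp(z * se_log)
for i in range(k):
    print(f"cause {i + 1}: SMR = {smr[i]:.2f}, CI = ({lower[i]:.2f}, {upper[i]:.2f})")
```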

3.
In a ground-breaking 1990 paper in the Journal of the Royal Statistical Society, J.R.M. Hosking defined the L-moments of a random variable as expectations of certain linear combinations of order statistics. L-moments are an alternative to conventional moments and have recently seen frequent use in inferential statistics. They have several advantages over conventional moments, including robustness to outliers, which can yield more accurate estimates of distributional characteristics. In this contribution, asymptotic theory and L-moments are used to derive confidence intervals for the population parameters and quantiles of the three-parameter generalized Pareto and extreme-value distributions. Computer simulations are performed to assess the performance of the L-moment-based confidence intervals for population quantiles and to compare them with those obtained by traditional estimation techniques. The results show that they perform well, and in some cases best, in comparison with the method-of-moments and maximum-likelihood approaches when interest lies in the higher quantiles. L-moments are especially recommended when the tail of the distribution is rather heavy and the sample size is small. The derived intervals are applied to real economic data, specifically to market-opening asset prices.
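A minimal sketch of how the sample L-moments behind such intervals are computed from order statistics, using Hosking's standard unbiased estimators; the heavy-tailed sample below is hypothetical.

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via Hosking's unbiased b_r statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                                # L-location (the mean)
    l2 = 2 * b1 - b0                       # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(1)
data = rng.pareto(3.0, size=100)           # hypothetical heavy-tailed sample
l1, l2, l3, l4 = sample_l_moments(data)
print(f"l1={l1:.3f}  l2={l2:.3f}  t3={l3 / l2:.3f}  t4={l4 / l2:.3f}")  # L-ratios
```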

4.
This article addresses the bias in income and expenditure elasticities estimated on pseudo-panel data that is caused by measurement error and unobserved heterogeneity. We gauge these biases empirically by comparing cross-sectional, pseudo-panel, and true panel data from both Polish and U.S. expenditure surveys. Our results suggest that unobserved heterogeneity imparts a downward bias to cross-section estimates of income elasticities of at-home food expenditures and an upward bias to estimates of income elasticities of away-from-home food expenditures. “Within” and first-difference estimators suffer less bias, but only if the effects of measurement error are accounted for with instrumental variables.

5.
Using a bivariate VAR-GARCH-BEKK model, this paper analyzes volatility spillover effects between China's garlic spot market and its electronic trading market. The results show volatility spillovers running from the spot market to the electronic trading market and from the electronic trading market to the spot market, so the spillover between the two markets is bidirectional. Moreover, the spillover from the electronic trading market to the spot market is stronger than that in the opposite direction, meaning that the volatility transmission between the two markets is dominated by spillovers from the electronic trading market to the spot market.
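For reference, a common form of the conditional-covariance recursion behind such spillover tests is the bivariate BEKK(1,1) specification (the generic textbook form is shown here; that this is the exact variant used is an assumption):

```latex
H_t \;=\; C^{\top}C \;+\; A^{\top}\,\varepsilon_{t-1}\varepsilon_{t-1}^{\top}\,A \;+\; B^{\top}H_{t-1}B
```

Here $\varepsilon_{t-1}$ is the $2 \times 1$ residual vector from the VAR mean equation; the off-diagonal elements of $A$ and $B$ carry the ARCH- and GARCH-type cross-market effects, so Wald tests that they are jointly zero yield the one-way and two-way spillover conclusions reported above.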

6.
The use of GARCH-type models and computational-intelligence-based techniques for forecasting financial time series has proved extremely successful in recent times. In this article, we apply a finite mixture of ARMA-GARCH models, rather than AR or ARMA models, and compare it with the standard BP neural network and SVM in forecasting financial time series (daily stock market index returns and exchange rate returns). We do not apply the pure GARCH model, as the finite mixture of ARMA-GARCH models outperforms it. These models are evaluated on five performance metrics or criteria. Our experiment shows that the SVM model outperforms both the finite mixture of ARMA-GARCH models and the BP model on the deviation criteria, while the finite mixture of ARMA-GARCH models performs better on the direction criteria. The memory property of these forecasting techniques is also examined using the behavior of forecasted values vis-à-vis the original values. Only the SVM model shows a long memory property in forecasting financial returns.
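As a sketch of the SVM side of such a comparison, the snippet below fits a support vector regression to lagged returns and reports one deviation criterion (RMSE) and one direction criterion (sign-hit rate). The data, lag order, and hyperparameters are illustrative assumptions, not the article's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical daily returns; in the article these are index/exchange-rate returns.
rng = np.random.default_rng(2)
r = rng.normal(0, 0.01, 600)

# Embed p lagged returns as features to forecast the next return.
p = 5
X = np.column_stack([r[i:len(r) - p + i] for i in range(p)])
y = r[p:]
split = 500
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.001))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

# Two of the usual criteria: deviation (RMSE) and direction accuracy.
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
hit = np.mean(np.sign(pred) == np.sign(y[split:]))
print(f"RMSE={rmse:.5f}  directional accuracy={hit:.2%}")
```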

7.
Using a basic pricing model for residential mortgages together with actual data on the relevant variables during the U.S. financial crisis, this paper carries out a sensitivity and elasticity analysis of the three direct initial factors in the crisis: house prices, house-price volatility, and interest rates. It explores the sensitivity of mortgage value and credit risk to these three factors and the mechanisms through which they act.

8.
A Comparative Study of the VaR and CVaR Methods of Financial Risk Measurement and Their Application
Value at risk (VaR) is a tool for measuring financial risk that has gained broad support and recognition in the international financial community in recent years. This article points out two major deficiencies of the VaR model, gives a detailed introduction to and comparative analysis of VaR and the conditional value at risk (CVaR) risk measure, and presents what the two have in common as well as the advantages of CVaR in portfolio applications. In light of the actual situation of China's financial markets, it notes the issues that deserve attention when CVaR is applied there and offers new ideas on its application prospects.
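A minimal historical-simulation sketch of the two measures, assuming hypothetical fat-tailed returns; it also illustrates the defining relationship CVaR ≥ VaR at the same confidence level.

```python
import numpy as np

def var_cvar(returns, alpha=0.99):
    """Historical-simulation VaR and CVaR (expected shortfall) at level alpha.

    Losses are the negatives of returns; VaR is the alpha-quantile of the
    loss distribution and CVaR is the mean loss beyond VaR, so CVaR >= VaR.
    """
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

rng = np.random.default_rng(3)
rets = rng.standard_t(df=4, size=2000) * 0.01   # hypothetical fat-tailed returns
var99, cvar99 = var_cvar(rets, 0.99)
print(f"VaR(99%)={var99:.4f}  CVaR(99%)={cvar99:.4f}")
```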

9.
The importance of Logistic distribution has been widely recognized in many applied areas such as, demography, population studies, finance, agriculture, etc. Since its introduction as a model, much attention has been paid to the study of several generalizations of it, which would offer additional flexibility when data fitting is chased. In the present paper we introduce and develop a natural generalization of the Logistic distribution by considering a probability model whose logit cumulative distribution function transformation is of polynomial type. The performance of the model's fitting to financial data, using different parameter estimation methods, is also investigated.  相似文献   

10.
We consider the design problem for estimating several scalar measures suggested in the epidemiological literature for comparing the success rates in two samples. The designs considered so far in the literature are local in the sense that they depend on the unknown probabilities of success in the two groups and are not necessarily robust to their misspecification. A maximin approach is proposed to obtain efficient and robust designs for estimating the relative risk, attributable risk, and odds ratio whenever the experimenter can specify a range for the success rates. It is demonstrated that the designs obtained by this method are usually more efficient than the commonly used uniform design, which allocates equal sample sizes to the two groups.
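The sketch below illustrates the flavor of a maximin design search for the relative risk: it minimizes the worst-case asymptotic variance of the log relative risk over experimenter-specified success-rate ranges. The ranges, sample size, and criterion are simplified assumptions, not the article's exact formulation.

```python
import numpy as np

def log_rr_var(p1, p2, n, w):
    """Asymptotic variance of the log relative risk with n*w subjects in group 1."""
    return (1 - p1) / (n * w * p1) + (1 - p2) / (n * (1 - w) * p2)

# Success-rate ranges the experimenter is willing to specify (assumed values).
p1_range = np.linspace(0.1, 0.3, 5)
p2_range = np.linspace(0.4, 0.8, 5)
ws = np.linspace(0.05, 0.95, 91)

# Guard against the worst case over the specified ranges: a simplified
# stand-in for the article's maximin criterion, not its exact formulation.
worst = [max(log_rr_var(p1, p2, 100, w) for p1 in p1_range for p2 in p2_range)
         for w in ws]
w_star = ws[int(np.argmin(worst))]
print(f"maximin allocation to group 1: w = {w_star:.2f} (uniform design: w = 0.50)")
```

On these assumed ranges the worst-case-optimal allocation is noticeably unbalanced (about 0.7 of the subjects in the rarer-event group), which is the sense in which such designs can beat the uniform design.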

11.
This article investigates risk-minimizing pricing and hedging strategies for European currency options when the spot foreign exchange rate is driven by a Markov-modulated jump-diffusion model. We suppose that the domestic and foreign money market floating interest rates and the drift and volatility of the exchange rate dynamics all depend on the state of the economy, modeled by a continuous-time hidden Markov chain. The model gives market practitioners flexibility in characterizing the dynamics of the spot foreign exchange rate. Using the minimal martingale measure, we obtain a system of coupled partial integro-differential equations satisfied by the currency option price and find the corresponding hedging strategies and the residual risk. Through simulations of currency option prices in the special case of a double-exponential jump-diffusion regime-switching model, we further discuss and illustrate the effects of the parameters on the prices.

12.
Taking the storm surge disaster risk of the 11 coastal provinces and municipalities as the research object, this paper optimizes a projection pursuit dynamic clustering (PPDC) model with a hybrid genetic/particle swarm algorithm and combines rough set theory (RST) with the modified PPDC model to comprehensively assess storm surge disaster risk in China's coastal areas and to divide them into regional risk grades. The empirical results show that Guangdong and Fujian are the high-risk areas for storm surge disasters in China, with risk scores above 2.5; Shandong, Zhejiang, Hainan, and Guangxi are medium-risk areas, with risk scores in the interval [1.8, 2.2]; and Jiangsu, Tianjin, Liaoning, Hebei, and Shanghai are low-risk areas, with risk scores below 1.5. The conclusions provide ideas and a reference for the state to implement differentiated disaster risk management strategies.

13.
In this article, we consider a discrete-time risk model with insurance and financial risks. We derive some refinements of a general asymptotic formula for the finite-time ruin probability under the assumptions that the net losses follow a common distribution in the intersection of the subexponential class and the Gumbel maximum domain of attraction, and that the stochastic discount factors of the risky asset have a common distribution with extended regular variation. The obtained asymptotic upper and lower bounds are transparent and computable.
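As a crude complement to such asymptotic bounds, the finite-time ruin probability in this kind of model can be approximated by Monte Carlo; the loss and discount-factor distributions below are illustrative assumptions only, not the distribution classes studied in the article.

```python
import numpy as np

def finite_time_ruin_prob(x, n, n_sim=200_000, seed=0):
    """Monte Carlo finite-time ruin probability for a discrete-time risk model.

    X: net insurance losses per period (heavy-tailed, Pareto-type here);
    Y: stochastic discount factors of the risky asset (lognormal here).
    Ruin occurs if the discounted cumulative loss ever exceeds the initial
    capital x within n periods.
    """
    rng = np.random.default_rng(seed)
    X = rng.pareto(2.5, (n_sim, n)) * 2.0 - 1.0       # claims minus premiums
    Y = np.exp(rng.normal(-0.04, 0.2, (n_sim, n)))    # discount factors
    S = np.cumsum(X * np.cumprod(Y, axis=1), axis=1)  # discounted loss process
    return float(np.mean(S.max(axis=1) > x))

print(finite_time_ruin_prob(x=50.0, n=10))
```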

14.
This paper builds a triangle model along three dimensions, namely the stability of the investment environment, the rationality of capital flows, and the effectiveness of financial markets, to examine the possible risks in China's capital flows to Kazakhstan. The results indicate that the overall risk of China's capital flows to Kazakhstan is readily affected by changes in the international environment and in Kazakhstan's domestic environment, and that the overall risk began to rise after 2008. A GM(1,1) model is used to forecast the risk status of these capital flows over 2015-2020. The forecasts show that in the coming years the risk status will lie mainly in a weakly safe zone, with Kazakhstan's investment environment, capital flow structure, and financial market stability all deteriorating to varying degrees, implying considerable capital flow risk.
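For reference, a minimal sketch of the standard GM(1,1) grey forecasting model of the kind used for the 2015-2020 risk projection (textbook form; the input series is hypothetical, and the paper's exact specification may differ):

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Standard GM(1,1) grey forecasting model (textbook form)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development/grey inputs
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat)[n - 1:]           # forecasts for periods n+1..n+steps

risk_index = [1.92, 1.98, 2.07, 2.15, 2.21]  # hypothetical annual risk scores
print(gm11_forecast(risk_index, steps=3))
```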

15.
The accuracy of a binary diagnostic test is usually measured in terms of its sensitivity and its specificity, or through positive and negative predictive values. Another way to describe the validity of a binary diagnostic test is the risk of error and the kappa coefficient of the risk of error. The risk of error is the average loss that is caused when incorrectly classifying a non-diseased or a diseased patient, and the kappa coefficient of the risk of error is a measure of the agreement between the diagnostic test and the gold standard. In the presence of partial verification of the disease, the disease status of some patients is unknown, and therefore the evaluation of a diagnostic test cannot be carried out through the traditional method. In this paper, we have deduced the maximum likelihood estimators and variances of the risk of error and of the kappa coefficient of the risk of error in the presence of partial verification of the disease. Simulation experiments have been carried out to study the effect of the verification probabilities on the coverage of the confidence interval of the kappa coefficient.
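For the fully verified case, the two quantities can be computed directly from a 2x2 table, as in the sketch below; the counts and unit losses are hypothetical, and the paper's partial-verification estimators are not implemented here.

```python
import numpy as np

# Hypothetical 2x2 counts: rows = diagnostic test (+/-), cols = gold standard (+/-).
table = np.array([[85, 10],
                  [15, 190]], dtype=float)
n = table.sum()
po = np.trace(table) / n                      # observed agreement
pe = (table.sum(1) @ table.sum(0)) / n ** 2   # chance agreement
kappa = (po - pe) / (1 - pe)

# Risk of error: average loss from misclassifying diseased (c_fn) and
# non-diseased (c_fp) patients; unit losses here are an assumption.
c_fn, c_fp = 1.0, 1.0
risk_of_error = (c_fn * table[1, 0] + c_fp * table[0, 1]) / n
print(f"kappa={kappa:.3f}  risk of error={risk_of_error:.3f}")
```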

16.
We derive Bayesian interval estimators for the differences in the true positive rates (TPRs) and false positive rates (FPRs) of two dichotomous diagnostic tests applied to the members of two distinct populations. The populations have varying disease prevalences with unverified negatives. We compare the performance of the Bayesian credible interval to the Wald interval using Monte Carlo simulation for a spectrum of TPRs, FPRs, and sample sizes. For the case of a low TPR and low FPR, we found that a Bayesian credible interval with relatively noninformative priors performed well. We obtain similar interval comparison results for the cases of a high TPR and high FPR, a high TPR and low FPR, and a high TPR and mixed FPR after incorporating mildly informative priors.
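A minimal sketch of the two interval types for a difference of TPRs in the fully verified case; the counts and the Jeffreys priors are assumptions for illustration, not the article's exact setup, which handles unverified negatives.

```python
import numpy as np
from scipy import stats

# Hypothetical verified counts of true positives among diseased subjects.
x1, n1 = 42, 50   # test 1
x2, n2 = 33, 50   # test 2
p1, p2 = x1 / n1, x2 / n2

# Wald interval for TPR1 - TPR2.
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = stats.norm.ppf(0.975)
print(f"Wald:     ({p1 - p2 - z * se:.3f}, {p1 - p2 + z * se:.3f})")

# Bayesian credible interval with Jeffreys Beta(1/2, 1/2) priors, by simulation
# (a relatively noninformative choice; the article's priors may differ).
rng = np.random.default_rng(6)
draws = (rng.beta(x1 + 0.5, n1 - x1 + 0.5, 100_000)
         - rng.beta(x2 + 0.5, n2 - x2 + 0.5, 100_000))
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"Bayesian: ({lo:.3f}, {hi:.3f})")
```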

17.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power, and the treatment effect, but also on the overall event rate, which is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty: the nuisance parameters required for sample size calculation are re-estimated during the ongoing trial, and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials in which the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure depends crucially on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. The unequal variance asymptotic normal formula turned out to outperform the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is misspecified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurs for the fixed sample size design.
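For orientation, a textbook-style asymptotic normal sample size calculation for a relative-risk comparison is sketched below; it is not necessarily identical to any of the three formulae compared in the article, and the event rates are hypothetical.

```python
import numpy as np
from scipy import stats

def n_per_group_log_rr(p0, p1, alpha=0.05, power=0.8, ratio=1.0):
    """Approximate sample sizes for a two-group test of the relative risk
    on the log scale (asymptotic normal, unequal variances).

    p0, p1: event rates in control and treatment; ratio = n0 / n1.
    """
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    var_term = (1 - p1) / p1 + (1 - p0) / (ratio * p0)
    n1 = (z_a + z_b) ** 2 * var_term / np.log(p1 / p0) ** 2
    return int(np.ceil(n1)), int(np.ceil(ratio * n1))

# The result is quite sensitive to the assumed overall event rate, which is
# exactly the nuisance parameter the internal pilot design re-estimates.
print(n_per_group_log_rr(p0=0.20, p1=0.30))
```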

18.
This study investigates the performance of parametric and nonparametric tests for analyzing repeated measures designs. Both multivariate normal and exponential distributions were simulated for varying values of the correlation and for ten or twenty subjects within each cell. For multivariate normal distributions, the type I error rates of the nonparametric tests were below the usual 0.05 level, whereas the parametric tests without the Greenhouse-Geisser or Huynh-Feldt adjustment produced slightly elevated type I error rates. For multivariate exponential distributions, the type I error rates of the nonparametric tests were more stable than those of the parametric tests, with or without the Greenhouse-Geisser or Huynh-Feldt adjustment. With ten subjects per cell, the parametric tests were more powerful than the nonparametric tests; with twenty subjects per cell, the power of the two was comparable.

19.
This article considers the shrinkage estimation procedure in the Cox proportional hazards regression model when it is suspected that some of the parameters may be restricted to a subspace. We develop the statistical properties of the shrinkage estimators, including their asymptotic distributional biases and risks. The shrinkage estimators have much higher relative efficiency than the classical estimator. Furthermore, we consider two penalty estimators, the LASSO and the adaptive LASSO, and compare their performance with that of the shrinkage estimators numerically. A Monte Carlo simulation experiment is conducted for different combinations of irrelevant predictors, and the performance of each estimator is evaluated in terms of simulated mean squared error. The simulation study shows that the shrinkage estimators are comparable to the penalty estimators when the number of irrelevant predictors in the model is relatively large. The shrinkage and penalty methods are applied to two real data sets to illustrate the usefulness of the procedures in practice.
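A brief sketch of the LASSO side of such a comparison using the lifelines package; the data set, the penalizer value, and the use of lifelines are illustrative choices, not the article's, and the shrinkage estimators themselves are not implemented here.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# LASSO-penalized Cox regression: l1_ratio=1.0 makes the elastic-net penalty
# a pure L1 (LASSO) penalty; penalizer=0.1 is an arbitrary illustrative value.
df = load_rossi()   # bundled recidivism data, used only for illustration
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.params_.round(3))  # coefficients of weak predictors shrink toward 0
```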
