Similar Literature
20 similar documents found.
1.
Use of probability distributions by regulatory agencies often focuses on the extreme events and scenarios that correspond to the tail of probability distributions. This paper makes the case that assessment of the tail of the distribution can, and often should, be performed separately from assessment of the central values. Factors to consider when developing distributions that account for tail behavior include (a) the availability of data, (b) characteristics of the tail of the distribution, and (c) the value of additional information in assessment. Integrating these elements improves the modeling of extreme events via the tails of distributions, thereby providing policy makers with critical information on the risk of extreme events. Two examples provide insight into the theme of the paper. The first demonstrates the need for a parallel analysis that separates the extreme events from the central values. The second shows a link between the selection of the tail distribution and a decision criterion. In addition, the phenomenon of record-breaking in time-series data gives insight into the information that characterizes extreme values. One methodology for treating the risk of extreme events explicitly adopts the conditional expected value as a measure of risk. Theoretical results concerning this measure are given to clarify some of the concepts of the risk of extreme events.
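As a concrete illustration of the conditional-expected-value measure mentioned in this abstract, the following minimal Python sketch estimates E[X | X > u] from simulated data; the lognormal losses and the 99th-percentile partition point are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's data): estimate
# the conditional expected value E[X | X > u] used as an extreme-event risk
# measure, where u partitions the tail from the central values.
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # hypothetical losses

u = np.quantile(losses, 0.99)        # partition point for the extreme tail
tail = losses[losses > u]
cond_mean = tail.mean()              # conditional expectation of the tail

print(f"u = {u:.3f}, E[X | X > u] = {cond_mean:.3f}")
```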

2.
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with binary and continuous multiple endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Yu and Catalano (2005) describe a method for quantitative risk assessment for bivariate continuous outcomes by extending a univariate method of percentile regression. The model is likelihood based and allows for separate dose‐response models for each outcome while accounting for the bivariate correlation. The approach to benchmark dose (BMD) estimation is analogous to that for quantal data without having to specify arbitrary cutoff values. In this article, we evaluate the behavior of the BMD relative to background rates, sample size, level of bivariate correlation, dose‐response trend, and distributional assumptions. Using simulations, we explore the effects of these factors on the resulting BMD and BMDL distributions. In addition, we illustrate our method with data from a neurotoxicity study of parathion exposure in rats.

3.
A model is developed for the detection time of fires in nuclear power plants, which differentiates between competing modes of detection and between different initial fire severities. Our state-of-knowledge uncertainties in the values of the model parameters are assessed from industry experience using Bayesian methods. Because the available data are sparse, we propose means to interpret imprecise forms of evidence to develop quantitative information that can be used in a statistical analysis; the intent is to maximize our use of all available information. Sensitivity analyses are performed to indicate the importance of structural and distributional assumptions made in the study. The methods used to treat imprecise evidence can be applied to a wide variety of problems. The specific equations developed in this analysis are useful in general situations where the random quantity of interest is the minimum of a set of random variables (e.g., in "competing risks" models). The computational results indicate that the competing modes formulation can lead to distributions different from those obtained via analytically simpler models, which treat each mode independently of the others.

4.
Downside risk measures such as value-at-risk and conditional value-at-risk do not account for the variability of the data in the tail, and they are therefore limited in characterizing extreme financial risk. To better control the probability of extreme tail losses, we characterize this extreme risk with the tail conditional variance, that is, the variance of the losses that exceed the value-at-risk. Given the importance of mixture elliptical distributions in modeling financial data, this paper studies the tail conditional variance of a portfolio under this class of distributions and derives an exact expression for the portfolio's tail conditional variance risk. To verify the results, we also carry out numerical computations and an application to optimal portfolio selection.
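For readers unfamiliar with the measure, the following Python sketch estimates the tail conditional variance by Monte Carlo under an assumed Student-t loss distribution; the paper itself derives exact closed-form expressions under mixture elliptical distributions, which this simulation only approximates.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of the tail conditional variance
# TCV_p(X) = Var(X | X > VaR_p(X)), the variance of losses beyond VaR.
# A Student-t loss is a stand-in assumption for the paper's
# mixture-elliptical setting, where closed-form expressions are exact.
rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=500_000)   # heavy-tailed hypothetical losses

p = 0.95
var_p = np.quantile(losses, p)                # value-at-risk at level p
tail = losses[losses > var_p]
tcv = tail.var(ddof=1)                        # tail conditional variance
es = tail.mean()                              # conditional VaR / expected shortfall

print(f"VaR = {var_p:.3f}, ES = {es:.3f}, TCV = {tcv:.3f}")
```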

5.
6.
We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in‐sample model selection using the Akaike information criterion; out‐of‐sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak‐predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out‐of‐sample and split‐sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out‐of‐sample and split‐sample schemes perform poorly if implemented in the conventional way. But they perform well, if implemented in conjunction with our risk‐reduction method or bagging.
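To make the split-sample scheme concrete, here is a hedged Python sketch: predictor selection on one subsample (via an AIC-style penalty) and parameter re-estimation on the other. The data-generating process, predictor count, and penalty constant are illustrative assumptions; the paper's risk-reduction and bagging refinements are not reproduced.

```python
import numpy as np

# Hedged sketch of split-sample model choice: select the predictor set on
# one half of the data, re-estimate the chosen model on the other half.
rng = np.random.default_rng(6)
n = 400
x = rng.normal(size=(n, 3))                  # candidate predictors
beta = np.array([0.3, 0.05, 0.0])            # one strong, one weak, one null
y = x @ beta + rng.normal(size=n)

half = n // 2
sel_x, sel_y = x[:half], y[:half]            # selection subsample
est_x, est_y = x[half:], y[half:]            # estimation subsample

# Select a predictor subset by AIC-style penalized fit on the first half.
best, best_aic = None, np.inf
for mask in range(1, 8):                     # nonempty subsets of 3 predictors
    cols = [j for j in range(3) if mask >> j & 1]
    b, *_ = np.linalg.lstsq(sel_x[:, cols], sel_y, rcond=None)
    rss = np.sum((sel_y - sel_x[:, cols] @ b) ** 2)
    aic = half * np.log(rss / half) + 2 * len(cols)
    if aic < best_aic:
        best, best_aic = cols, aic

# Re-estimate the selected model on the held-out half, then forecast with b.
b, *_ = np.linalg.lstsq(est_x[:, best], est_y, rcond=None)
print("selected predictors:", best, "coefficients:", b)
```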

7.
Using expected values to simplify decision making under uncertainty
Ian N. Durbach & Theodor J. Stewart, Omega, 2009, 37(2): 312-330
A simulation study examines the impact of a simplification strategy that replaces distributional attribute evaluations with their expected values and uses those expectations in an additive value model. Several alternative simplified forms and approximation approaches are investigated, with results showing that, in general, the simplified models provide acceptable performance that is fairly robust to a variety of internal and external environmental changes, including changes to the distributional forms of the attribute evaluations, errors in the assessment of the expected values, and problem size. Certain simplified models are shown to be highly sensitive to the form of the underlying preference functions, and in particular to extreme nonlinearity in these preferences.

8.
The relationship between returns and volatility (risk compensation) is central to asset pricing and risk management, yet research on it remains inconclusive: financial theory implies a positive relationship, but empirical studies often find the opposite. Heavy tails and negative skewness are well-known stylized features of returns, and these intrinsic properties inevitably have an important influence on risk compensation. This paper shows that negative skewness and heavy tails in returns reduce the risk-compensation coefficient, which in turn accounts for the empirical uncertainty surrounding that coefficient. Although Sharpe ratios are similar across industries, type-I risk compensation, type-II risk compensation, and the price of risk differ considerably across industries in the Chinese stock market, and as the investment horizon lengthens the three respectively tend to decline, rise, and rise slightly.

9.
To overcome the computational difficulty of mean-ES (expected shortfall) portfolio selection, we prove that the problem can be transformed into an expectile regression and propose a new expectile-regression solution method. The method has two advantages. First, the expectile-regression objective is a quadratic loss function that is continuous and smooth, so the optimization is simple, tractable, and easily extensible. Second, optimizing the expectile-regression objective yields the expectile, and the correspondence between expectiles and ES then gives the exact ES of the optimal portfolio. In an empirical study of five industry-representative stocks from the CSI 300 index, comparing the expectile-regression mean-ES model with mean-VaR and mean-variance models shows that the former diversifies portfolio tail risk well and significantly improves portfolio performance.
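As a hedged sketch of the expectile building block this paper relies on (not the full portfolio optimization), the Python below computes a sample expectile by iteratively reweighted least squares; the return-generating process and the choice tau = 0.05 are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: the tau-expectile e_tau of a sample minimizes the smooth
# asymmetric quadratic loss sum_i |tau - 1(r_i < e)| * (r_i - e)^2, which is
# easy to optimize by a weighted-mean fixed point. Mapping the expectile to
# ES follows the correspondence the paper proves, not reproduced here.
def expectile(r, tau, tol=1e-10, max_iter=200):
    e = r.mean()
    for _ in range(max_iter):
        w = np.where(r < e, 1.0 - tau, tau)   # asymmetric weights
        e_new = np.sum(w * r) / np.sum(w)     # weighted-mean fixed point
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.02, size=2_000)  # hypothetical daily returns
print(expectile(returns, tau=0.05))             # low tau targets the loss tail
```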

10.
Hendrik Jürges, LABOUR, 2002, 16(2): 347-381
This paper provides a distributional analysis of the public–private sector wage gap in Germany from 1984 to 1996. The public sector wage distribution is generally less dispersed than the private sector wage distribution. The raw wage differential is positive for males at the lower tail of the male wage distribution and negative at the upper tail. In contrast, females enjoy positive wage gaps along most of the wage distribution. A decomposition analysis reveals that the male wage premium, i.e. the part of the wage gap not accounted for by differences in observable characteristics, is uniformly negative, whereas the female wage premium is positive.

11.
The distributional approach for uncertainty analysis in cancer risk assessment is reviewed and extended. The method considers a combination of bioassay study results, targeted experiments, and expert judgment regarding biological mechanisms to predict a probability distribution for uncertain cancer risks. Probabilities are assigned to alternative model components, including the determination of human carcinogenicity, mode of action, the dosimetry measure for exposure, the mathematical form of the dose‐response relationship, the experimental data set(s) used to fit the relationship, and the formula used for interspecies extrapolation. Alternative software platforms for implementing the method are considered, including Bayesian belief networks (BBNs) that facilitate assignment of prior probabilities, specification of relationships among model components, and identification of all output nodes on the probability tree. The method is demonstrated using the application of Evans, Sielken, and co‐workers for predicting cancer risk from formaldehyde inhalation exposure. Uncertainty distributions are derived for maximum likelihood estimate (MLE) and 95th percentile upper confidence limit (UCL) unit cancer risk estimates, and the effects of resolving selected model uncertainties on these distributions are demonstrated, considering both perfect and partial information for these model components. A method for synthesizing the results of multiple mechanistic studies is introduced, considering the assessed sensitivities and selectivities of the studies for their targeted effects. A highly simplified example is presented illustrating assessment of genotoxicity based on studies of DNA damage response caused by naphthalene and its metabolites. The approach can provide a formal mechanism for synthesizing multiple sources of information using a transparent and replicable weight‐of‐evidence procedure.
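The enumeration step can be pictured with a small Python sketch: probabilities are assigned to alternative model components, every branch of the probability tree is enumerated, and each branch carries a unit-risk estimate. All component probabilities and risk numbers below are purely illustrative assumptions, not values from the formaldehyde application.

```python
import itertools

# Hedged sketch of a probability tree over alternative model components.
# Every probability and unit-risk number here is hypothetical.
carcinogenic = [("yes", 0.8), ("no", 0.2)]
dose_metric = [("inhaled", 0.6), ("delivered", 0.4)]
model_form = [("multistage", 0.5), ("weibull", 0.5)]
unit_risk = {("inhaled", "multistage"): 2e-5, ("inhaled", "weibull"): 8e-6,
             ("delivered", "multistage"): 5e-6, ("delivered", "weibull"): 2e-6}

branches = []
for (c, pc), (d, pd), (m, pm) in itertools.product(
        carcinogenic, dose_metric, model_form):
    risk = unit_risk[(d, m)] if c == "yes" else 0.0
    branches.append((pc * pd * pm, risk))   # (branch probability, branch risk)

mean_risk = sum(p * r for p, r in branches)
print(f"probability-weighted unit risk: {mean_risk:.2e}")
```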

12.
In this study, the tail probability of a class of distributions commonly used in assessing the severity of insurance losses was examined. Without specifying any particular distribution, the use of an algebraic functional form $Cx^{-\alpha}$ to approximate the tail behavior of the distributions in the class was demonstrated. Norwegian fire insurance data were examined, and the algebraic functional form was applied to derive the expected loss of a reinsurance treaty that covers all losses exceeding a retention limit. It was shown that (1) the expected loss is insensitive to the parameter $\alpha$ for a high retention limit (e.g., a catastrophe treaty), and (2) with a low retention limit (e.g., a largest claim treaty), a reliable estimate of the parameter $\alpha$ and a sound judgment on the maximum potential loss of the treaty could provide useful and defensible summary statistics for pricing the treaty. Thus, when dealing with the losses of certain reinsurance treaties, it was concluded that knowledge of a specific probability distribution is not critical, and the summary statistics derived from the model are robust with respect to a large class of loss distributions.
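Under the algebraic tail $P(X > x) = Cx^{-\alpha}$, the expected loss to a layer follows by integrating the survival function above the retention, $E[(X-R)^+] = \int_R^M Cx^{-\alpha}\,dx = C\,(R^{1-\alpha} - M^{1-\alpha})/(\alpha - 1)$. The short Python sketch below implements this; the numeric inputs are illustrative, not the Norwegian fire data.

```python
# Hedged sketch: expected loss to a reinsurance layer when the loss tail is
# approximated by P(X > x) = C * x**(-alpha), alpha > 1. Integrating the
# survival function from the retention R to the maximum potential loss M
# gives E[(X - R)^+] = C * (R**(1-alpha) - M**(1-alpha)) / (alpha - 1).
def expected_layer_loss(C, alpha, R, M=None):
    if alpha <= 1:
        raise ValueError("alpha must exceed 1 for a finite expected loss")
    upper = 0.0 if M is None else C * M ** (1 - alpha)  # M=None: unlimited cover
    return (C * R ** (1 - alpha) - upper) / (alpha - 1)

# Purely illustrative inputs, not the Norwegian fire insurance data.
for alpha in (1.5, 1.7, 2.0):
    print(alpha, expected_layer_loss(C=1.0, alpha=alpha, R=100.0, M=10_000.0))
```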

13.
A two-phase approach is used to examine the impact of job scheduling rules and tool selection policies for a dynamic job shop system in a tool-shared, flexible manufacturing environment. The first phase develops a generalized simulation model and analyses 'simple' job scheduling rules and tool selection policies under various operating scenarios. The results from this investigation are then used to develop and analyse various bi-criteria rules in the second phase of this study. The results show that the scheduling rules have the most significant impact on system performance, particularly at high shop load levels. Tool selection policies affect some of the performance measures, most notably, proportion of tardy jobs, to a lesser degree. Higher machine utilizations can be obtained at higher tool duplication levels but at the expense of increased tooling costs and lower tool utilization. The results also show that using different processing time distributions may have a significant impact on shop performance.

14.
Using Bayesian Networks to Model Expected and Unexpected Operational Losses
This report describes the use of Bayesian networks (BNs) to model statistical loss distributions in financial operational risk scenarios. Its focus is on modeling "long" tail, or unexpected, loss events using mixtures of appropriate loss frequency and severity distributions where these mixtures are conditioned on causal variables that model the capability or effectiveness of the underlying controls process. The use of causal modeling is discussed from the perspective of exploiting local expertise about process reliability and formally connecting this knowledge to actual or hypothetical statistical phenomena resulting from the process. This brings the benefit of supplementing sparse data with expert judgment and transforming qualitative knowledge about the process into quantitative predictions. We conclude that BNs can help combine qualitative data from experts and quantitative data from historical loss databases in a principled way and as such they go some way in meeting the requirements of the draft Basel II Accord (Basel, 2004) for an advanced measurement approach (AMA).
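A stripped-down Python sketch of the mixing idea (loss frequency and severity conditioned on a controls-effectiveness variable) is given below; the prior on controls and all distribution parameters are illustrative assumptions standing in for the expert-elicited BN, not the report's model.

```python
import numpy as np

# Hedged sketch: compound annual losses where both frequency (Poisson rate)
# and severity (lognormal tail) are conditioned on a discrete "controls
# effectiveness" variable, mimicking the causal-conditioning idea.
rng = np.random.default_rng(5)
n_years = 50_000

controls_good = rng.uniform(size=n_years) < 0.9   # assumed expert prior
lam = np.where(controls_good, 2.0, 10.0)          # annual loss frequency
sig = np.where(controls_good, 1.0, 2.0)           # severity tail heaviness

counts = rng.poisson(lam)
annual = np.array([rng.lognormal(10.0, s, size=k).sum()
                   for s, k in zip(sig, counts)])

# Expected vs. unexpected ("long tail") loss of the mixture.
print(f"mean = {annual.mean():.0f}, "
      f"99.9% quantile = {np.quantile(annual, 0.999):.0f}")
```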

15.
Jun Li, Ying Zhou, Yan Ge & Weina Qu, Risk Analysis, 2023, 43(9): 1871-1886
The purpose of this study was to explore the mediating effect of difficulties in emotion regulation on the relationship between sensation seeking and driving behavior based on the dual-process model of aberrant driving behavior. A sample of 299 drivers in China completed the Difficulties in Emotion Regulation Scale, the Driver Behavior Questionnaire, and the Sensation Seeking Scale V (SSS). The relationships among sensation seeking, difficulties in emotion regulation, and driving behavior were investigated using pathway analysis. The results showed that (1) disinhibition and boredom susceptibility are positively and significantly related to difficulties in emotion regulation and risky driving behaviors; (2) difficulties in emotion regulation are positively and significantly associated with risky driving behaviors; (3) difficulties in emotion regulation mediate the effect of sensation seeking on driving behaviors, supporting the dual-process model of driving behavior; and (4) professional drivers score higher in terms of difficulties in emotion regulation and risky driving behaviors than nonprofessional drivers. The findings of this study could provide valuable insights into the selection of suitable drivers and the development of certain programs that benefit road safety.

16.
If the point of view is adopted that in calculations of real-world phenomena we almost invariably have significant uncertainty in the numerical values of our parameters, then, in these calculations, numerical quantities should be replaced by probability distributions and mathematical operations between these quantities should be replaced by analogous operations between probability distributions. Also, practical calculations one way or another always require discretization or truncation. Combining these two thoughts leads to a numerical approach to probabilistic calculations having great simplicity, power, and elegance. The philosophy and technique of this approach are described, some pitfalls are pointed out, and an application to seismic risk assessment is outlined.
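The following Python sketch shows the core operation under stated assumptions (independent operands, re-discretization after each step); the binning scheme and example inputs are illustrative choices, not the article's.

```python
import numpy as np

# Hedged sketch: represent each uncertain quantity as a discretized
# distribution (values + probabilities); replace arithmetic between
# quantities by the analogous operation on distributions, condensing
# the result back onto a small support. Operands assumed independent.
def combine(xv, xp, yv, yp, op, n_bins=50):
    v = op(xv[:, None], yv[None, :]).ravel()      # all pairwise outcomes
    p = (xp[:, None] * yp[None, :]).ravel()       # their joint probabilities
    edges = np.linspace(v.min(), v.max(), n_bins + 1)
    idx = np.clip(np.digitize(v, edges) - 1, 0, n_bins - 1)
    pv = np.bincount(idx, weights=p, minlength=n_bins)  # condensed masses
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = pv > 0
    return centers[keep], pv[keep]

# Example: product of two uncertain factors (values hypothetical).
xv, xp = np.array([0.8, 1.0, 1.2]), np.array([0.25, 0.5, 0.25])
yv, yp = np.array([10.0, 20.0]), np.array([0.6, 0.4])
zv, zp = combine(xv, xp, yv, yp, np.multiply)
print(zv, zp, zp.sum())   # probabilities still sum to 1
```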

17.
Cointegrated bivariate nonstationary time series are considered in a fractional context, without allowance for deterministic trends. Both the observable series and the cointegrating error can be fractional processes. The familiar situation in which the respective integration orders are 1 and 0 is nested, but these values have typically been assumed known. We allow one or more of them to be unknown real values, in which case Robinson and Marinucci (2001, 2003) have justified least squares estimates of the cointegrating vector, as well as narrow‐band frequency‐domain estimates, which may be less biased. While consistent, these estimates do not always have optimal convergence rates, and they have nonstandard limit distributional behavior. We consider estimates formulated in the frequency domain, which consequently allow for a wide variety of (parametric) autocorrelation in the short memory input series, as well as time‐domain estimates based on autoregressive transformation. Both can be interpreted as approximating generalized least squares and Gaussian maximum likelihood estimates. The estimates share the same limiting distribution, having mixed normal asymptotics (yielding Wald test statistics with $\chi^2$ null limit distributions), irrespective of whether the integration orders are known or unknown, subject in the latter case to their estimation with adequate rates of convergence. The parameters describing the short memory stationary input series are $\sqrt{n}$-consistently estimable, but the assumptions imposed on these series are much more general than ones of autoregressive moving average type. A Monte Carlo study of finite‐sample performance is included.
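For orientation, a minimal LaTeX sketch of the setup, with notation assumed rather than copied from the paper:

```latex
% Hedged sketch of the bivariate fractional cointegration setup
% (notation assumed, not verbatim from the paper).
\[
  y_t = \nu x_t + u_t, \qquad x_t \sim I(\delta), \qquad u_t \sim I(\gamma),
  \qquad 0 \le \gamma < \delta .
\]
% The familiar case is nested at $\delta = 1$, $\gamma = 0$; here $\gamma$
% and $\delta$ may be unknown real values estimated alongside the
% cointegrating coefficient $\nu$.
```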

18.
Losses due to natural hazard events can be extraordinarily high and difficult to cope with. Therefore, there is considerable interest to estimate the potential impact of current and future extreme events at all scales in as much detail as possible. As hazards typically spread over wider areas, risk assessment must take into account interrelations between regions. Neglecting such interdependencies can lead to a severe underestimation of potential losses, especially for extreme events. This underestimation of extreme risk can lead to the failure of risk management strategies when they are most needed, namely, in times of unprecedented events. In this article, we suggest a methodology to incorporate such interdependencies in risk via the use of copulas. We demonstrate that by coupling losses, dependencies can be incorporated in risk analysis, avoiding the underestimation of risk. Based on maximum discharge data of river basins and stream networks, we present and discuss different ways to couple loss distributions of basins while explicitly incorporating tail dependencies. We distinguish between coupling methods that require river structure data for the analysis and those that do not. For the latter approach we propose a minimax algorithm to choose coupled basin pairs so that the underestimation of risk is avoided and the use of river structure data is not needed. The proposed methodology is especially useful for large‐scale analysis and we motivate and apply our method using the case of Romania. The approach can be easily extended to other countries and natural hazards.
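To show how copula coupling preserves tail dependence, here is a hedged Python sketch using a survival-Clayton copula (upper-tail dependent and easy to sample) with lognormal marginals; the copula family, its parameter, and the marginals are illustrative assumptions, since the paper calibrates these to discharge data.

```python
import numpy as np
from scipy.stats import lognorm

# Hedged sketch: couple two basins' loss distributions with a copula so that
# their upper-tail dependence is preserved. Survival Clayton = Clayton on
# (1-u, 1-v), sampled by the standard conditional-inverse method.
def survival_clayton(n, theta, rng):
    u = rng.uniform(1e-12, 1.0, size=n)
    w = rng.uniform(1e-12, 1.0, size=n)
    v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return 1 - u, 1 - v        # flip moves the dependence to the upper tail

rng = np.random.default_rng(3)
u1, u2 = survival_clayton(200_000, theta=2.0, rng=rng)

# Plug the dependent uniforms into each basin's marginal quantile function
# (lognormal marginals are a stand-in assumption).
loss1 = lognorm.ppf(u1, s=1.0, scale=50.0)
loss2 = lognorm.ppf(u2, s=1.2, scale=30.0)
coupled = loss1 + loss2
independent = loss1 + rng.permutation(loss2)   # same marginals, no coupling

# The coupled far-tail total loss is markedly larger than under independence.
print(np.quantile(coupled, 0.999), np.quantile(independent, 0.999))
```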

19.
Traditionally, microbial risk assessors have used point estimates to evaluate the probability that an individual will become infected. We developed a quantitative approach that shifts the risk characterization perspective from point estimate to distributional estimate, and from individual to population. To this end, we first designed and implemented a dynamic model that tracks traditional epidemiological variables such as the numbers of susceptible, infected, diseased, and immune individuals, and environmental variables such as pathogen density. Second, we used a simulation methodology that explicitly acknowledges the uncertainty and variability associated with the data. Specifically, the approach consists of assigning probability distributions to each parameter, sampling from these distributions for Monte Carlo simulations, and using a binary classification to assess the output of each simulation. A case study is presented that explores the uncertainties in assessing the risk of giardiasis when swimming in a recreational impoundment using reclaimed water. Using literature-based information to assign parameter ranges, our analysis demonstrated that the parameter describing the shedding of pathogens by infected swimmers contributed most to the uncertainty in risk. The importance of other parameters depended on reducing the a priori range of this shedding parameter: when the shedding parameter was constrained to its lower subrange, treatment efficiency was the parameter most important in predicting whether a simulation resulted in prevalences above or below nonoutbreak levels, whereas parameters associated with human exposure were important when the shedding parameter was constrained to a higher subrange. This Monte Carlo simulation technique identified conditions in which outbreaks and/or nonoutbreaks are likely and identified the parameters that contributed most to the uncertainty associated with a risk prediction.
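The simulation strategy can be sketched in a few lines of Python: sample each uncertain parameter from an assigned distribution, evaluate the model, and classify each run. The one-line exposure model, all parameter ranges, and the outbreak threshold below are deliberate stand-in assumptions for the paper's full susceptible/infected/diseased/immune dynamics.

```python
import numpy as np

# Hedged sketch: assign a distribution to each uncertain parameter, sample
# parameter sets, run the model, and binarily classify each run as
# outbreak vs. non-outbreak. All numbers are hypothetical.
rng = np.random.default_rng(4)
n_runs = 10_000

shedding = rng.lognormal(mean=5.0, sigma=1.5, size=n_runs)  # cysts per swimmer
treatment = rng.uniform(0.9, 0.9999, size=n_runs)           # removal efficiency
ingestion = rng.uniform(0.01, 0.05, size=n_runs)            # litres swallowed
r = 0.02                                                    # dose-response slope

dose = shedding * (1 - treatment) * ingestion
p_inf = 1 - np.exp(-r * dose)          # exponential dose-response model
outbreak = p_inf > 0.01                # binary classification of each run

print(f"fraction of runs above outbreak level: {outbreak.mean():.3f}")
```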

20.
This study is an econometric systems approach to modeling the factors and linkages affecting risk perceptions toward agricultural biotechnology, self-protection actions, and food demand. The model is applied to milk in the United States, but it can be adapted to other products as well as other categories of risk perceptions. The contribution of this formulation is the ability to examine how explanatory factors influence risk perceptions, whether they translate into behavior, and ultimately what impact this has on aggregate markets. Hadden's outrage factors, which heighten risk perceptions, are among the factors examined. In particular, the article examines the role of labeling as a means of permitting informed consent to mitigate outrage factors. The effects of attitudinal, economic, and demographic factors on risk perceptions are also explored, as well as the linkage between risk perceptions, consumer behavior, and food demand. Because risk perceptions and self-protection actions are categorical variables and demand is a continuous variable, the model is estimated as a two-stage mixed system with a covariance correction procedure suggested by Amemiya. The findings indicate that it is the availability of labeling, not the price difference between labeled milk and milk produced with recombinant bovine somatotropin (rbST), that significantly affects consumers' selection of rbST-free milk. The results indicate that greater availability of labeled milk would not only significantly increase the proportion of consumers who purchase labeled milk; its availability would also reduce the perception of risk associated with rbST, whether consumers purchase it or not. In other words, availability of rbST-free milk translates into lower risk perceptions toward milk produced with rbST.
