Similar Documents
20 similar documents found (search time: 453 ms).
1.
Abstract.  We introduce a fully parametric approach for updating beliefs regarding correlated binary variables, after marginal probability assessments based on information of varying quality are provided by an expert. This approach allows for the calculation of a predictive joint density for future assessments. The proposed methodology offers new insight into the parameters that control the dependence of the binary variables, and the relation of these parameters to the joint density of the probability assessments. A comprehensible elicitation procedure for the model parameters is put forward. The approach taken is motivated and illustrated through a practical application.

2.
A model company     
HUGIN Expert is a small company writing software that can be used to create expert systems, using probability in the guise of graphical models. Steffen Lauritzen describes his part in the genesis and development of the company.

3.
Simulation and extremal analysis of hurricane events
In regions affected by tropical storms the damage caused by hurricane winds can be catastrophic. Consequently, accurate estimates of hurricane activity in such regions are vital. Unfortunately, the severity of events means that wind speed data are scarce and unreliable, even by the standards usual for extreme value analysis. In contrast, records of atmospheric pressures are more complete. This suggests a two-stage approach: first, the development of a model describing spatiotemporal patterns of wind field behaviour for hurricane events; then the simulation of such events, using meteorological climate models, to obtain a realization of associated wind speeds whose extremal characteristics are summarized. This is not a new idea, but we apply careful statistical modelling to each aspect of the model development and simulation, taking the Gulf and Atlantic coastlines of the USA as our study area. Moreover, we address for the first time the issue of spatial dependence in extremes of hurricane events, which we find to have substantial implications for regional risk assessments.
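As a hedged illustration of the final summarization step only, the sketch below fits a generalized extreme value distribution to synthetic annual maximum wind speeds and reads off a 100-year return level; the data and parameters are illustrative assumptions, not output of the authors' wind-field model.

```python
# Minimal sketch: summarize extremal characteristics of simulated wind
# speeds by fitting a GEV to annual maxima (synthetic placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_max = rng.gumbel(loc=40.0, scale=8.0, size=200)  # hypothetical maxima (m/s)

shape, loc, scale = stats.genextreme.fit(annual_max)
rp = 100  # return period in years
z100 = stats.genextreme.ppf(1 - 1 / rp, shape, loc=loc, scale=scale)
print(f"estimated {rp}-year return level: {z100:.1f} m/s")
```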

4.
Summary.  Systematic review and synthesis (meta-analysis) methods are now increasingly used in many areas of health care research. We investigate the potential usefulness of these methods for combining human and animal data in human health risk assessment of exposure to environmental chemicals. Currently, risk assessments are often based on narrative review and expert judgment, but systematic review and formal synthesis methods offer a more transparent and rigorous approach. The method is illustrated using the example of trihalomethane exposure and its possible association with low birth weight. A systematic literature review identified 13 relevant studies (five epidemiological and eight toxicological). Study-specific dose–response slope estimates were obtained for each of the studies and synthesized by using Bayesian meta-analysis models. Sensitivity analyses of the results with respect to the assumptions made suggest that some assumptions are critical. It is concluded that systematic review methods should be used in the synthesis of evidence for environmental standard setting, that meta-analysis will often be a valuable approach in these contexts, and that sensitivity analyses are an important component of the approach whether or not formal synthesis methods (such as systematic review and meta-analysis) are used.
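The sketch below illustrates the kind of random-effects synthesis described, under stated assumptions: hypothetical slope estimates and standard errors, a normal-normal model y_i ~ N(mu, s_i^2 + tau^2), and a simple grid posterior with vague priors standing in for the authors' full Bayesian meta-analysis models.

```python
# Minimal sketch: grid-based random-effects synthesis of study-specific
# dose-response slopes (all numbers hypothetical, not the THM data).
import numpy as np

y = np.array([0.12, 0.05, 0.20, 0.08, 0.15])   # hypothetical slope estimates
s = np.array([0.06, 0.04, 0.10, 0.05, 0.07])   # hypothetical standard errors

mu_grid = np.linspace(-0.2, 0.5, 301)
tau_grid = np.linspace(0.0, 0.3, 151)
M, T = np.meshgrid(mu_grid, tau_grid, indexing="ij")

# log-likelihood of y_i ~ N(mu, s_i^2 + tau^2), summed over studies
var = s[None, None, :] ** 2 + T[..., None] ** 2
loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (y - M[..., None]) ** 2 / var, axis=-1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

mu_post_mean = (post.sum(axis=1) * mu_grid).sum()   # marginal over tau
print(f"posterior mean pooled slope: {mu_post_mean:.3f}")
```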

5.
We consider the problem of updating beliefs for binary random variables, when probability assessments are elicited for them based on information of varying quality. We propose the threshold model, a Bayesian updating procedure where only measures of location and correlation have to be specified before any updating is possible. The main aspect of this model is the use of Jeffrey's conditionalization. Under this rule, it is not necessary to model the assessments, and how they relate to the quantities of interest, in a fully parametric way. This paper is motivated by a practical problem in which a large company must manage its assets and future expenditure.
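A minimal worked example of Jeffrey's conditionalization, the rule the threshold model relies on, may help: when evidence shifts the probability of a partition element E to a new value without making it certain, the updated probability of A is a mixture of the old conditionals. The numbers below are hypothetical.

```python
# Jeffrey's rule: P_new(A) = P(A|E) * P_new(E) + P(A|not E) * P_new(not E)
p_A_given_E = 0.9      # hypothetical prior conditional
p_A_given_notE = 0.2
p_E_new = 0.7          # an assessment moves P(E) to 0.7, short of certainty

p_A_new = p_A_given_E * p_E_new + p_A_given_notE * (1 - p_E_new)
print(f"updated P(A) = {p_A_new:.2f}")   # 0.9*0.7 + 0.2*0.3 = 0.69
```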

6.

We study models for recurrent events, with special emphasis on the situation where a terminal event acts as a competing risk for the recurrent events process and where there may be gaps between the periods during which subjects are at risk for the recurrent event. We focus on marginal analysis of the expected number of events and show that an Aalen–Johansen-type estimator proposed by Cook and Lawless is applicable in this situation. A motivating example deals with psychiatric hospital admissions, which we supplement with analyses of the marginal distribution of time to the competing event and of the marginal distribution of the time spent in hospital. Pseudo-observations are used for the latter purpose.
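A minimal sketch of the marginal mean estimator described, under simplifying assumptions (hypothetical data, no ties, pre-computed at-risk counts): a Kaplan–Meier factor for surviving the terminal event multiplies a Nelson–Aalen-type increment for the recurrent events.

```python
# Cook-Lawless / Aalen-Johansen-type estimator of the expected number of
# recurrent events in the presence of a terminal event (toy data).
import numpy as np

times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kind  = np.array(["recurrent", "terminal", "recurrent", "censor", "recurrent"])
Y     = np.array([5, 5, 4, 3, 2])  # at-risk counts just before each time

S = 1.0   # running KM estimate of P(no terminal event yet)
mu = 0.0  # running estimate of the expected number of recurrent events
for t, k, y in zip(times, kind, Y):
    if k == "recurrent":
        mu += S * 1.0 / y          # S(t-) * dN(t)/Y(t)
    elif k == "terminal":
        S *= 1.0 - 1.0 / y         # KM step for the competing terminal event
print(f"estimated marginal mean number of events by t=5: {mu:.3f}")
```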


7.
Elicitation methods are proposed for quantifying expert opinion about a multivariate normal sampling model. The natural conjugate prior family imposes a relationship between the mean vector and the covariance matrix that can portray an expert's opinion poorly. Instead we assume that opinions about the mean and the covariance are independent, and suggest innovative forms of question which enable the expert to quantify separately his or her opinion about each of these parameters. Prior opinion about the mean vector is modelled by a multivariate normal distribution, and opinion about the covariance matrix by both an inverse-Wishart distribution and a generalized inverse-Wishart (GIW) distribution. To construct the latter, results are developed that give insight into the GIW parameters and their interrelationships. Some of the elicitation methods exploit unconditional assessments as fully as possible, since these can reflect an expert's beliefs more accurately than conditional assessments. The methods are illustrated through an example.
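The sketch below illustrates the independence structure the paper argues for, assuming hypothetical hyperparameters of the kind an expert might supply: a multivariate normal prior on the mean and an inverse-Wishart prior on the covariance (scipy's invwishart stands in for the paper's GIW construction).

```python
# Minimal sketch: independent priors on mean and covariance, sampled once.
import numpy as np
from scipy.stats import invwishart, multivariate_normal

m0 = np.array([10.0, 5.0])             # elicited prior mean vector (hypothetical)
V0 = np.diag([4.0, 1.0])               # elicited uncertainty about the mean
nu = 7                                 # inverse-Wishart degrees of freedom
Psi = np.array([[8.0, 2.0],
                [2.0, 3.0]])           # inverse-Wishart scale matrix

rng = np.random.default_rng(0)
Sigma = invwishart.rvs(df=nu, scale=Psi, random_state=rng)  # one prior draw
mu = multivariate_normal.rvs(mean=m0, cov=V0, random_state=rng)
print("draw of mu:", mu, "\ndraw of Sigma:\n", Sigma)
```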

8.
A stratified analysis of differences in proportions has been widely employed in epidemiological research, the social sciences, and drug development. It provides a useful framework for combining data across strata to produce a common effect. However, for rare events with incidence rates close to zero, popular confidence intervals for risk differences in a stratified analysis may not have coverage probabilities that approach the nominal confidence levels, and the algorithms may fail to produce a valid confidence interval because of zero events in both arms of a stratum. The main objective of this study is to evaluate the performance of certain methods commonly employed to construct confidence intervals for stratified risk differences when the response probabilities are close to a boundary value of zero or one. Additionally, we propose an improved stratified Miettinen–Nurminen confidence interval that exhibits superior performance over the standard methods while avoiding the computational difficulties involving rare events. The proposed method can also be employed when the response probabilities are close to one.
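As a hedged illustration, the sketch below computes one of the standard intervals whose rare-event behaviour such a study would evaluate: a Mantel–Haenszel-weighted stratified risk difference with a Wald-type interval. The score-based Miettinen–Nurminen interval the authors improve on requires iterative root-finding and is not reproduced here; all counts are hypothetical.

```python
# Minimal sketch: stratified risk difference with MH weights, Wald CI.
import numpy as np

# per-stratum counts: events/size in treatment (x1, n1) and control (x0, n0)
x1 = np.array([1, 0, 2]); n1 = np.array([100, 120, 80])
x0 = np.array([4, 3, 5]); n0 = np.array([100, 110, 90])

p1, p0 = x1 / n1, x0 / n0
w = n1 * n0 / (n1 + n0)                    # Mantel-Haenszel weights
rd = np.sum(w * (p1 - p0)) / np.sum(w)     # pooled risk difference
var = np.sum(w**2 * (p1*(1-p1)/n1 + p0*(1-p0)/n0)) / np.sum(w)**2
lo, hi = rd - 1.96 * np.sqrt(var), rd + 1.96 * np.sqrt(var)
print(f"stratified RD = {rd:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
```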

9.
Abstract. A series of failures of securities companies (investment banks) at home and abroad shows that early and effective warning of securities company failure is extremely important. Drawing on domestic and international theory and experience in building corporate failure early-warning models, this paper takes Chinese securities companies as its research object and defines the financial failure of a securities company as bankruptcy or the imposition of risk-disposal measures by the securities regulator. A sample of 24 financially failed and 24 financially healthy securities companies is selected, a targeted set of indicators is designed, and the Logit, Probit and discriminant analysis methods are compared; the Logit method is ultimately used to build a successful failure early-warning model for securities companies.
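A minimal sketch of the modelling step described, on randomly generated placeholder data rather than the 48 sampled securities companies: a logistic (Logit) early-warning model fitted to financial-ratio features, with failed/healthy firms coded 1/0.

```python
# Minimal sketch: Logit early-warning model on hypothetical financial ratios.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 48
X = rng.normal(size=(n, 4))                        # hypothetical financial ratios
y = np.r_[np.ones(24), np.zeros(24)].astype(int)   # 24 failed, 24 healthy

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba(X)[:, 1]              # estimated failure probabilities
print("in-sample warning rate at 0.5 cut-off:", (pd_hat > 0.5).mean())
```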

10.
Yang Xu & Nie Lei, 《统计研究》 (Statistical Research), 2008, 25(9): 32–35
A reinsurer's overall risk management capacity, standards and conduct directly affect the risk management performance of the reinsurance company and the stability of the entire insurance market. This paper uses extreme value theory to model the risk distribution of reinsurance business and compares the risk distributions of quota-share (proportional) and non-proportional reinsurance. It finds that reinsurance loss distributions are not normal but heavy-tailed; that quota-share losses have a large mean and small variance, whereas non-proportional losses have a smaller mean but larger variance; and that at high confidence levels the risk loss rate of non-proportional business far exceeds that of quota-share business. Reinsurance companies should therefore vigorously develop non-proportional reinsurance, strengthen their capital base, actively broaden their business scope, and diversify risk in the international market.
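The sketch below illustrates the extreme-value step described, on simulated heavy-tailed placeholder losses: a generalized Pareto fit above a high threshold and a peaks-over-threshold quantile at a high confidence level. The threshold choice and parameters are illustrative assumptions.

```python
# Minimal sketch: GPD tail fit and a high-confidence loss quantile (VaR).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
losses = stats.lognorm.rvs(s=1.2, scale=1.0, size=5000, random_state=rng)

u = np.quantile(losses, 0.95)             # threshold (hypothetical choice)
exc = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(exc, floc=0)

# VaR at level q from the peaks-over-threshold representation
q = 0.999
n, nu = len(losses), len(exc)
var_q = u + beta / xi * (((n / nu) * (1 - q)) ** (-xi) - 1)
print(f"xi = {xi:.3f} (heavy tail if > 0); VaR at {q}: {var_q:.2f}")
```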

11.
A model for the lifetime of a system is considered in which the system is susceptible to simultaneous failures of two or more components, the failures having a common external cause. Three sets of discrete failure data from the US nuclear industry are examined to motivate and illustrate the model derivation: they are for motor-operated valves, cooling fans and emergency diesel generators. To achieve target reliabilities, these components must be placed in systems that have built-in redundancy. Consequently, multiple failures due to a common cause are critical in the risk of core meltdown. Vesely has offered a simple methodology for inference, called the binomial failure rate model: external events are assumed to be governed by a Poisson shock model in which resulting shocks kill X out of m system components, X having a binomial distribution with parameters (m, p), 0 < p < 1. In many applications the binomial failure rate model fits failure data poorly, and the model has not typically been applied to probabilistic risk assessments in the nuclear industry. We introduce a realistic generalization of the binomial failure rate model by assigning a mixing distribution to the unknown parameter p. The distribution is generally identifiable, and its unique nonparametric maximum likelihood estimator can be obtained by using a simple iterative scheme.
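A minimal simulation sketch of the generalization described: shocks arrive as a Poisson process and each shock kills X ~ Binomial(m, p), with p drawn from a mixing distribution, here, as an assumption, a Beta; the paper itself estimates the mixing distribution nonparametrically.

```python
# Minimal sketch: Poisson shocks with a Beta-mixed binomial kill count.
import numpy as np

rng = np.random.default_rng(3)
m = 4            # redundant components in the system
rate = 0.5       # shock rate per year (hypothetical)
T = 10_000       # years simulated

n_shocks = rng.poisson(rate * T)
p = rng.beta(0.8, 3.0, size=n_shocks)      # hypothetical mixing distribution on p
X = rng.binomial(m, p)                      # components killed per shock

# empirical distribution of the number killed per shock
for k in range(m + 1):
    print(f"P(X = {k}) ~ {(X == k).mean():.3f}")
```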

12.
This work is based on the construction and use of a SLAM II PC-based simulator for a medium-sized company manufacturing machine tools for working glass. Response surface methods and several experimental design techniques were applied to optimize simulator performance. Because there is a feedback loop to the actual manufacturing system, the theory of orthogonal polynomials was applied to a planned experimental configuration of non-equispaced tests performed on the real system; this extends the basic case of equispaced configurations. A summary is presented of the results obtained from a 3 × 4 mixed-level factorial experiment implemented with the discrete, stochastic industrial simulator.
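As a hedged illustration of the orthogonal-polynomial construction for non-equispaced levels, the sketch below applies Gram–Schmidt (via QR) to a Vandermonde matrix at unequally spaced factor settings; the levels are hypothetical and the approach is a generic stand-in for the study's derivation.

```python
# Minimal sketch: orthogonal polynomial contrasts at non-equispaced levels.
import numpy as np

x = np.array([1.0, 2.5, 4.0, 8.0])      # non-equispaced factor levels
V = np.vander(x, N=4, increasing=True)  # columns: 1, x, x^2, x^3

Q, _ = np.linalg.qr(V)                  # Gram-Schmidt (orthonormal columns)
print(np.round(Q.T @ Q, 10))            # identity: contrasts are orthogonal
print(np.round(Q, 4))                   # constant, linear, quadratic, cubic
```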

13.
Event counts are response variables with non-negative integer values representing the number of times that an event occurs within a fixed domain such as a time interval, a geographical area or a cell of a contingency table. Analysing counts with Gaussian regression models ignores their discreteness, asymmetry and heteroscedasticity and is inefficient, producing unrealistic standard errors or possibly negative predictions of the expected number of events. Poisson regression is the standard model for count data, but its underlying assumptions on the generating process may be implausible in many applications. Statisticians have long recognized the limitation of imposing equidispersion under the Poisson regression model. A typical situation is when the conditional variance exceeds the conditional mean, in which case models allowing for overdispersion are routinely used. Less reported is the case of underdispersion, with fewer modeling alternatives and assessments available in the literature. One such alternative, the Gamma-count model, is adopted here in the analysis of an agronomic experiment designed to investigate the effect of levels of defoliation at different phenological states on the number of cotton bolls. The data set and code for the analysis are available as online supplements. Results show improvements over the Poisson model and the semi-parametric quasi-Poisson model in capturing the observed variability in the data. Estimating rather than assuming the underlying variance process leads to important insights into the process.
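A minimal sketch of the Gamma-count probability mass function, assuming i.i.d. Gamma(alpha, beta) waiting times so that P(N(T) = n) = G(alpha n, beta T) - G(alpha (n+1), beta T), with G the regularized lower incomplete gamma; alpha > 1 gives underdispersion and alpha = 1 recovers the Poisson. The parameters are hypothetical, not the cotton-boll estimates.

```python
# Minimal sketch: Gamma-count pmf and its mean/variance (underdispersion).
import numpy as np
from scipy.special import gammainc

def gamma_count_pmf(n, alpha, beta, T=1.0):
    # P(N >= n) = G(alpha*n, beta*T); P(N >= 0) = 1
    upper = gammainc(alpha * (n + 1), beta * T)
    lower = gammainc(alpha * n, beta * T) if n > 0 else 1.0
    return lower - upper

alpha, beta = 2.0, 10.0   # alpha > 1: variance < mean (hypothetical values)
n_vals = np.arange(15)
pmf = np.array([gamma_count_pmf(n, alpha, beta) for n in n_vals])
mean = np.sum(n_vals * pmf)
var = np.sum((n_vals - mean) ** 2 * pmf)
print(f"mean = {mean:.3f}, variance = {var:.3f}  (underdispersed)")
```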

14.
This paper investigates the ruin probability and ruin time of a two-dimensional fractional Brownian motion risk process. The net loss process of an insurance company is modeled by a fractional Brownian motion. The two-dimensional fractional Brownian motion risk process models the surplus processes of an insurance and a reinsurance company, where the net loss is divided between them in some specified proportions. The ruin problem considered is that of the two-dimensional risk process first entering the negative quadrant, that is, the simultaneous ruin problem. We derive both asymptotics of the ruin probability and approximations of the scaled conditional ruin time as the initial capital tends to infinity.
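The Monte Carlo sketch below illustrates the simultaneous-ruin setup under stated assumptions: a single fractional Brownian motion net-loss path (Cholesky simulation on a grid) split in fixed proportions between insurer and reinsurer, with ruin declared when both surpluses are negative at the same grid time. Grid simulation only approximates continuous-time ruin, and all parameters are hypothetical.

```python
# Minimal sketch: grid Monte Carlo for simultaneous ruin under a shared fBm.
import numpy as np

H, T, n = 0.7, 10.0, 200
t = np.linspace(T / n, T, n)
cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
             - np.abs(t[:, None] - t[None, :])**(2*H))
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))   # jitter for stability

u1, u2 = 5.0, 3.0          # initial capitals (hypothetical)
c1, c2 = 1.0, 0.8          # premium rates
d1, d2 = 0.7, 0.3          # proportions of the common net loss

rng = np.random.default_rng(11)
ruined, reps = 0, 2000
for _ in range(reps):
    Z = L @ rng.standard_normal(n)          # one fBm path
    R1 = u1 + c1 * t - d1 * Z
    R2 = u2 + c2 * t - d2 * Z
    ruined += np.any((R1 < 0) & (R2 < 0))   # simultaneous ruin on the grid
print(f"estimated simultaneous ruin probability: {ruined / reps:.4f}")
```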

15.
In the expert use problem, hierarchical models provide an ideal perspective for classifying, understanding and generalising the aggregation algorithms suitable for combining experts' opinions into a single synthesis distribution. After suggesting that Peter A. Morris' (1971, 1974, 1977) Bayesian model be viewed in this light, this paper addresses the problem of modelling the multidimensional 'performance function', which encodes the aggregator's beliefs about each expert's assessment ability and the degree of dependence among the experts. Whenever the aggregator does not have an empirically founded probability distribution for the experts' performances, the proposed fiducial procedure provides a rational and very flexible tool that enables the performance function to be specified with a relatively small number of assessments; moreover, it grounds the aggregator's beliefs about the experts in terms of personal long-run frequencies.

16.
An elicitation method is proposed for quantifying subjective opinion about the regression coefficients of a generalized linear model. Opinion about the relationship between a continuous predictor variable and the dependent variable is modelled by a piecewise-linear function, giving a flexible model that can represent a wide variety of opinion. To quantify his or her opinions, the expert uses an interactive computer program, performing assessment tasks that involve drawing graphs and bar charts to specify medians and other quantiles. Opinion about the regression coefficients is represented by a multivariate normal distribution whose parameters are determined from the assessments. It is practical to use the procedure with models containing a large number of parameters. This is illustrated through practical examples, and the benefit of using prior knowledge is examined through cross-validation.
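As a hedged illustration of one step in such a scheme, the sketch below turns assessed medians and quartiles at the knots of the piecewise-linear function into normal means and standard deviations via sd = IQR / 1.349; the knots and quantiles are hypothetical, and the paper's full procedure also determines the covariances between coefficients.

```python
# Minimal sketch: quantile assessments at knots -> normal prior parameters.
import numpy as np

knots = np.array([0.0, 10.0, 20.0])      # hypothetical predictor knots
medians = np.array([1.2, 2.0, 2.5])      # assessed medians at the knots
q25 = np.array([0.9, 1.6, 1.9])
q75 = np.array([1.6, 2.5, 3.2])

means = medians
sds = (q75 - q25) / 1.349                # normal IQR relationship
for k, m, s in zip(knots, means, sds):
    print(f"knot {k:4.1f}: N(mean={m:.2f}, sd={s:.2f})")
```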

17.
Causal probabilistic models have been suggested for representing diagnostic knowledge in expert systems. This paper describes the theoretical basis for, and the implementation of, an expert system based on causal probabilistic networks. The system includes model search for building the knowledge base, a shell for making the knowledge base available to users in consultation sessions, and a user interface. The system contains facilities for storing and propagating knowledge, and mechanisms for building the knowledge base by semi-automated analysis of a large sparse contingency table. The contingency table contains data acquired for patients in the same diagnostic category as the intended application area of the expert system. The knowledge base is created by combining expert knowledge with a statistical model search in a model conversion scheme based on the theory developed by Lauritzen & Spiegelhalter, using exact tests as suggested by Kreiner. The system is implemented on a PC and has been used to simulate the diagnostic value of additional clinical information for coronary artery disease patients being considered for referral to coronary arteriography.

18.
Default is a rare event, even in segments in the midrange of a bank’s portfolio. Inference about default rates is essential for risk management and for compliance with the requirements of Basel II. Most commercial loans are in the middle-risk categories and are to unrated companies. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using a prior distribution assessed from an industry expert. The binomial model, most common in applications, is extended to allow correlated defaults. A check of robustness is illustrated with an ε-mixture of priors.
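A minimal sketch of the prior-robustness idea mentioned, with hypothetical hyperparameters and counts: a binomial likelihood for defaults, an expert-style Beta prior, and an ε-mixture that contaminates it with a vague prior, evaluated on a grid (the paper's correlated-default extension is not reproduced).

```python
# Minimal sketch: grid posterior for a default rate under an eps-mixture prior.
import numpy as np
from scipy import stats

a, b = 1.0, 99.0        # expert-style Beta prior, mean 1% (hypothetical)
k, n = 2, 500           # observed defaults among n obligors (hypothetical)
eps = 0.1               # contamination weight on a vague Beta(1, 1)

grid = np.linspace(1e-6, 0.2, 2000)
prior = (1 - eps) * stats.beta.pdf(grid, a, b) + eps * stats.beta.pdf(grid, 1, 1)
post = prior * stats.binom.pmf(k, n, grid)
post /= post.sum()
print(f"posterior mean default rate: {(grid * post).sum():.4%}")
```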

19.
Solvency is a comprehensive indicator of a company's financial capacity, and solvency supervision is the primary objective of insurance regulation as well as a core element of insurers' risk management. In line with the solvency supervision requirements of the China Insurance Regulatory Commission for life insurers, nine financial indicators are selected and principal component analysis is used to give a comprehensive evaluation of the solvency of 18 life insurance companies active in the Chinese insurance market. The paper proposes broadening the channels for investing insurance funds, optimizing management structures, recruiting outstanding talent, and improving solvency assessment methods, so as to effectively raise the solvency of Chinese life insurers.
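The sketch below illustrates the principal-component evaluation described, on random placeholder data rather than the insurers' indicators: standardize the 18 × 9 indicator matrix, extract components, and form a composite score weighted by explained variance.

```python
# Minimal sketch: PCA-based composite solvency score and ranking.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(18, 9))              # 18 insurers, 9 indicators (placeholder)

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)
weights = pca.explained_variance_ratio_
composite = scores @ weights              # variance-weighted composite score
print("solvency ranking (best first):", np.argsort(-composite))
```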

20.
A substantial degree of uncertainty surrounds the reconstruction of events based on memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool for assessing mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold standard), which yield approximately constant rates of depression, and cross-sectional studies, which often find rates increasing closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history from a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption regarding age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
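As a hedged illustration of the estimation idea, the sketch below runs an EM iteration with a smoothed M-step to recover a recall-error distribution on a grid, assuming the true onset distribution known; the grids, the assumed incidence and the simulated data are hypothetical placeholders for the paper's high-dimensional non-parametric estimator.

```python
# Minimal sketch: EM with smoothing for a recall-error pmf, incidence assumed known.
import numpy as np

ages = np.arange(10, 51)                   # true onset ages (grid)
errs = np.arange(-10, 11)                  # recall error in years (grid)
p_a = np.exp(-0.5 * ((ages - 25) / 8.0) ** 2)
p_a /= p_a.sum()                           # assumed incidence distribution

rng = np.random.default_rng(9)
err_p = np.exp(-np.abs(errs) / 3.0)
err_p /= err_p.sum()                       # hidden "true" error pmf to recover
true_a = rng.choice(ages, size=500, p=p_a)
obs = true_a + rng.choice(errs, size=500, p=err_p)

f = np.full(len(errs), 1.0 / len(errs))    # initial guess for the error pmf
kernel = np.array([0.25, 0.5, 0.25])       # simple smoothing kernel
mask = (ages[None, :, None] + errs[None, None, :]) == obs[:, None, None]
for _ in range(50):
    # E-step: posterior over (age, error) pairs consistent with each observation
    post = p_a[None, :, None] * f[None, None, :] * mask
    post /= post.sum(axis=(1, 2), keepdims=True)
    # M-step, followed by kernel smoothing
    f = post.sum(axis=(0, 1)) / len(obs)
    f = np.convolve(f, kernel, mode="same")
    f /= f.sum()
print("estimated error pmf peaks at:", errs[np.argmax(f)])
```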

