Similar Literature
20 similar documents retrieved.
1.
Joye Gordon. Risk Analysis, 2003, 23(6): 1287-1296
Foodborne illness represents a serious health hazard in the United States. Since foodborne illness can often be prevented by an individual's behavior, messages aimed at promoting safe food-handling behaviors should be a major tool to reduce the incidence of foodborne illness. This article argues that to achieve adoption of safe food-handling practices in the home, food-safety messages should both stimulate risk perceptions and promote self-efficacy, the feeling that one can successfully enact recommended behaviors. A content analysis of nationally distributed food-safety messages examined whether messages incorporated these features. Since food-safety communicators operate in complex environments with multiple and sometimes competing objectives, this study also examined whether sponsorship of foodborne illness prevention messages was related to the amount of content designed to alter risk perceptions associated with foodborne illness. Results of the quantitative content analysis found that copywriters generally included content designed to stimulate risk perception about foodborne illness but virtually ignored the self-efficacy needs of the audience. A marked difference in tendencies to stimulate risk perceptions was found based on sponsorship. Both in volume and in proportion, results show that governmentally sponsored messages attempted more aggressively to heighten risk perceptions associated with foodborne illness than did messages sponsored by privately funded communicators.

2.
Various methods exist to calculate confidence intervals for the benchmark dose in risk analysis. This study compares the performance of three such methods in fitting nonlinear dose-response models: the delta method, the likelihood-ratio method, and the bootstrap method. A data set from a developmental toxicity test with continuous, ordinal, and quantal dose-response data is used for the comparison of these methods. Nonlinear dose-response models with various shapes were fitted to these data. The results indicate that a few thousand runs are generally needed to obtain stable confidence limits when using the bootstrap method. Further, the bootstrap and likelihood-ratio methods were found to give fairly similar results. The delta method, however, in some cases resulted in different (usually narrower) intervals and appears unreliable for nonlinear dose-response models. Since the bootstrap method is more time-consuming than the likelihood-ratio method, the latter is more attractive for routine dose-response analysis. In the context of a probabilistic risk assessment, the bootstrap method has the advantage that it links directly to Monte Carlo analysis.
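As an illustration of the bootstrap approach discussed above, the sketch below fits a simple one-stage quantal model P(d) = 1 - exp(-(a + b·d)) to hypothetical dose-response data and derives a benchmark-dose interval by parametric bootstrap. The data, the model choice, and the benchmark response are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical quantal developmental-toxicity data
doses = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
n     = np.array([50, 50, 50, 50, 50])
cases = np.array([2, 5, 10, 20, 35])
BMR = 0.10  # benchmark response: 10% extra risk

def neg_loglik(theta, cases):
    a, b = theta
    p = np.clip(1.0 - np.exp(-(a + b * doses)), 1e-9, 1 - 1e-9)
    return -np.sum(cases * np.log(p) + (n - cases) * np.log(1 - p))

def fit(cases):
    res = minimize(neg_loglik, x0=[0.05, 0.005], args=(cases,),
                   bounds=[(1e-9, None), (1e-9, None)])
    return res.x  # maximum-likelihood estimates (a, b)

a_hat, b_hat = fit(cases)
bmd = -np.log(1.0 - BMR) / b_hat  # extra risk is 1 - exp(-b*d), so BMD has a closed form
p_hat = 1.0 - np.exp(-(a_hat + b_hat * doses))

# parametric bootstrap: as the abstract notes, a few thousand refits are
# usually needed before the limits stabilize
rng = np.random.default_rng(1)
boot = np.array([-np.log(1.0 - BMR) / fit(rng.binomial(n, p_hat))[1]
                 for _ in range(2000)])
bmdl, bmdu = np.percentile(boot, [5, 95])
print(f"BMD = {bmd:.1f}; 90% bootstrap interval = ({bmdl:.1f}, {bmdu:.1f})")
```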

3.
This guest editorial is a summary of the NCSU/USDA Workshop on Sensitivity Analysis held June 11-12, 2001 at North Carolina State University and sponsored by the U.S. Department of Agriculture's Office of Risk Assessment and Cost Benefit Analysis. The objective of the workshop was to learn across disciplines in identifying, evaluating, and recommending sensitivity analysis methods and practices for application to food-safety process risk models. The workshop included presentations regarding the Hazard Analysis and Critical Control Points (HACCP) framework used in food-safety risk assessment, a survey of sensitivity analysis methods, invited white papers on sensitivity analysis, and invited case studies regarding risk assessment of microbial pathogens in food. Based on the sharing of interdisciplinary information represented by the presentations, the workshop participants, divided into breakout sessions, responded to three trigger questions: What are the key criteria for sensitivity analysis methods applied to food-safety risk assessment? What sensitivity analysis methods are most promising for application to food-safety risk assessment? And what are the key needs for implementation and demonstration of such methods? The workshop produced agreement regarding key criteria for sensitivity analysis methods and the need to use two or more methods to try to obtain robust insights. Recommendations were made regarding a guideline document to assist practitioners in selecting, applying, interpreting, and reporting the results of sensitivity analysis.

4.
In risk analysis problems, the decision-making process is supported by the utilization of quantitative models. Assessing the relevance of interactions is an essential piece of information in the interpretation of model results. With such knowledge, analysts and decision makers are able to understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are oftentimes large, requiring a high number of input parameters, and complex, with individual model runs being time consuming. Computational complexity leads analysts to utilize one-parameter-at-a-time sensitivity methods, which prevent one from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models by varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that allows one to determine exactly the relevance of factors when considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency of the large loss of coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, allow us to quantify the relevance of interactions, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their cooperation.
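The core idea, decomposing a finite change in output into one-at-a-time effects plus an interaction residual, can be illustrated in a few lines. The model and parameter values below are hypothetical stand-ins, not the PSA model analyzed in the article.

```python
import numpy as np

def g(x):
    # hypothetical non-additive risk model: failure rate x exposure time x consequence factor
    lam, t, c = x
    return lam * t * c

x0 = np.array([1e-3, 100.0, 0.5])   # base case
x1 = np.array([2e-3, 150.0, 0.8])   # changed case

total = g(x1) - g(x0)
individual = []
for i in range(len(x0)):
    xi = x0.copy()
    xi[i] = x1[i]                    # move only parameter i, keep the rest at base
    individual.append(g(xi) - g(x0))

interaction = total - sum(individual)
print("total change        :", total)
print("individual effects  :", individual)
print("interaction residual:", interaction)   # nonzero -> non-additive structure
```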

5.
This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
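To make the ANOVA-versus-correlation comparison concrete, here is a minimal sketch that ranks the inputs of a deliberately nonlinear, thresholded toy model by a binned one-way F statistic and by Spearman rank correlation. The model and input names are invented for illustration; they are not the MFSPR model used in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
storage_temp = rng.uniform(0, 15, n)       # hypothetical controllable input (deg C)
storage_time = rng.uniform(0, 48, n)       # hours
dose_factor  = rng.lognormal(0, 0.5, n)    # uncertain dose scaling

# thresholded growth: no growth below 7 deg C (illustrative nonlinearity)
growth = np.where(storage_temp > 7, 0.05 * (storage_temp - 7) * storage_time, 0.0)
risk = dose_factor * np.expm1(growth)

def f_value(x, y, bins=10):
    # one-way ANOVA of the output across equal-count bins of the input
    groups = [y[idx] for idx in np.array_split(np.argsort(x), bins)]
    return stats.f_oneway(*groups).statistic

for name, x in [("storage_temp", storage_temp),
                ("storage_time", storage_time),
                ("dose_factor", dose_factor)]:
    rho = stats.spearmanr(x, risk)[0]
    print(f"{name:13s}  F = {f_value(x, risk):9.1f}   Spearman rho = {rho:+.2f}")
```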

6.
Physiologically-based pharmacokinetic (PBPK) models are often submitted to or selected by agencies, such as the U.S. Environmental Protection Agency (U.S. EPA) and the Agency for Toxic Substances and Disease Registry, for consideration for application in human health risk assessment (HHRA). Recently, U.S. EPA evaluated the human PBPK models for perchlorate and radioiodide for their ability to estimate the relative sensitivity of perchlorate inhibition on thyroidal radioiodide uptake for various population groups and lifestages. The most well-defined mode of action of the environmental contaminant, perchlorate, is competitive inhibition of thyroidal iodide uptake by the sodium-iodide symporter (NIS). In this analysis, a six-step framework for PBPK model evaluation was followed, and with a few modifications, the models were determined to be suitable for use in HHRA to evaluate relative sensitivity among human lifestages. Relative sensitivity to perchlorate was determined by comparing the PBPK model predicted percent inhibition of thyroidal radioactive iodide uptake (RAIU) by perchlorate for different lifestages. A limited sensitivity analysis indicated that model parameters describing urinary excretion of perchlorate and iodide were particularly important in the prediction of RAIU inhibition; therefore, a range of biologically plausible values available in the peer-reviewed literature was evaluated. Using the updated PBPK models, the greatest sensitivity to RAIU inhibition was predicted for the near-term fetus (gestation week 40) compared to the average adult and other lifestages; however, when exposure factors were taken into account, newborns were found to be a population that needs further evaluation and consideration in a risk assessment for perchlorate.

7.
Risk Analysis, 2018, 38(9): 1988-2009
Harbor seals in Iliamna Lake, Alaska, are a small, isolated population, and one of only two freshwater populations of harbor seals in the world, yet little is known about their abundance or risk of extinction. Bayesian hierarchical models were used to estimate the abundance and trend of this population. Observational models were developed from aerial survey and harvest data, and they included effects of time of year and time of day on survey counts. Underlying models of abundance and trend were based on a Leslie matrix model that used prior information on vital rates from the literature. We developed three scenarios for variability in the priors and used them as part of a sensitivity analysis. The models were fitted using Markov chain Monte Carlo methods. The population production rate implied by the vital rate estimates was about 5% per year, very similar to the average annual harvest rate. After a period of growth in the 1980s, the population appears to be relatively stable at around 400 individuals. A population viability analysis assessed the risk of quasi-extinction, defined as any reduction to 50 animals or below in the next 100 years; this risk ranged from 1% to 3%, depending on the prior scenario. Although this is a moderately low risk, it does not account for genetic effects or catastrophic environmental events, which may have affected the population in the past, so our results should be applied cautiously.
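As a rough illustration of the population viability calculation described above, the sketch below projects a small two-stage seal population forward with stochastic vital rates and a roughly 5% harvest, and estimates the probability of falling to 50 animals or fewer within 100 years. The vital rates and their spreads are invented placeholders, not draws from the fitted Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(7)
years, n_sims, threshold = 100, 2000, 50

def project_once():
    pups, adults = 60.0, 340.0               # roughly 400 animals to start
    for _ in range(years):
        # crude stand-in for both parameter uncertainty and year-to-year variation
        fecundity = rng.normal(0.22, 0.03)    # pups per adult (hypothetical)
        s_pup     = np.clip(rng.normal(0.60, 0.05), 0, 1)
        s_adult   = np.clip(rng.normal(0.92, 0.02), 0, 1)
        harvest   = 0.05                      # ~5% annual harvest, as in the abstract
        new_pups   = adults * max(fecundity, 0)
        new_adults = (pups * s_pup + adults * s_adult) * (1 - harvest)
        pups, adults = new_pups, new_adults
        if pups + adults <= threshold:
            return True                       # quasi-extinction reached
    return False

p_quasi_ext = np.mean([project_once() for _ in range(n_sims)])
print(f"quasi-extinction probability over {years} years: {p_quasi_ext:.3f}")
```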

8.
Sensitivity analysis (SA) methods are a valuable tool for identifying critical control points (CCPs), which is one of the important steps in the hazard analysis and CCP approach that is used to ensure safe food. There are many SA methods used across various disciplines. Furthermore, food safety process risk models pose challenges because they often are highly nonlinear, contain thresholds, and have discrete inputs. Therefore, it is useful to compare and evaluate SA methods based upon applications to an example food safety risk model. Ten SA methods were applied to a draft Vibrio parahaemolyticus (Vp) risk assessment model developed by the Food and Drug Administration. The model was modified so that all inputs were independent. Rankings of key inputs from different methods were compared. Inputs such as water temperature, number of oysters per meal, and the distributional assumption for the unrefrigerated time were the most important inputs, whereas time on water, fraction of pathogenic Vp, and the distributional assumption for the weight of oysters were the least important inputs. Most of the methods gave a similar ranking of key inputs even though the methods differed in terms of being graphical, mathematical, or statistical, accounting for individual effects or joint effect of inputs, and being model dependent or model independent. A key recommendation is that methods be further compared by application on different and more complex food safety models. Model independent methods, such as ANOVA, mutual information index, and scatter plots, are expected to be more robust than others evaluated.

9.
Standard statistical methods understate the uncertainty one should attach to effect estimates obtained from observational data. Among the methods used to address this problem are sensitivity analysis, Monte Carlo risk analysis (MCRA), and Bayesian uncertainty assessment. Estimates from MCRAs have been presented as if they were valid frequentist or Bayesian results, but examples show that they need not be either in actual applications. It is concluded that both sensitivity analyses and MCRA should begin with the same type of prior specification effort as Bayesian analysis.

10.
A great deal of research has been devoted to credit risk evaluation, and a variety of credit scoring models and methods have been developed. However, almost all of these models rely on financial statements, stock prices, or survey results published by risk research agencies. Because the financial data of most small and medium-sized enterprises are not publicly available, the credit evaluation models developed to date cannot be applied to them in practice. This article therefore proposes a new approach that evaluates the creditworthiness of customer firms using only routine transaction data, such as sales, customer payments, and overdue amounts. A Bagging-based credit evaluation system is presented, designed to address the class imbalance caused by abnormal customers being far less numerous than normal ones and thereby to improve the ability to identify abnormal customers. The proposed system is applied to the credit evaluation problem of a real enterprise to verify its performance and effectiveness.
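For readers unfamiliar with the technique, the sketch below shows one plausible shape such a system could take: a bagged ensemble of decision trees trained on synthetic transaction features (sales, payments received, overdue amounts) with a rare abnormal class. The data, features, and parameter choices are invented for illustration and are not the system or data described in the paper.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 3000
sales    = rng.lognormal(10, 1, n)                  # synthetic transaction features
payments = sales * rng.uniform(0.6, 1.0, n)
overdue  = np.maximum(sales - payments, 0) * rng.uniform(0, 1, n)

# rare "abnormal" customers, driven mainly by a high overdue-to-sales ratio
abnormal = (overdue / sales + rng.normal(0, 0.05, n)) > 0.32
X = np.column_stack([sales, payments, overdue])
y = abnormal.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Bagging of shallow trees; class_weight="balanced" on each tree keeps the
# rare abnormal class from being swamped by the normal majority.
clf = BaggingClassifier(
    DecisionTreeClassifier(class_weight="balanced", max_depth=5),
    n_estimators=100, max_samples=0.5, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```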

11.
There is increasing interest in the integration of quantitative risk analysis with benefit-cost and cost-effectiveness methods to evaluate environmental health policy making and perform comparative analyses. However, the combined use of these methods has revealed deficiencies in the available methods, and the lack of useful analytical frameworks currently constrains the utility of comparative risk and policy analyses. A principal issue in integrating risk and economic analysis is the lack of common performance metrics, particularly when conducting comparative analyses of regulations with disparate health endpoints (e.g., cancer and noncancer effects or risk-benefit analysis) and quantitative estimation of cumulative risk, whether from exposure to single agents with multiple health impacts or from exposure to mixtures. We propose a general quantitative framework and examine assumptions required for performing analyses of health risks and policies. We review existing and proposed risk and health-impact metrics for evaluating policies designed to protect public health from environmental exposures, and identify their strengths and weaknesses with respect to their use in a general comparative risk and policy analysis framework. Case studies are presented to demonstrate applications of this framework with risk-benefit and air pollution risk analyses. Through this analysis, we hope to generate discussions regarding the data requirements, analytical approaches, and assumptions required for general models to be used in comparative risk and policy analysis.

12.
Methods of engineering risk analysis are based on a functional analysis of systems and on the probabilities (generally Bayesian) of the events and random variables that affect their performances. These methods allow identification of a system's failure modes, computation of its probability of failure or performance deterioration per time unit or operation, and of the contribution of each component to the probabilities and consequences of failures. The model has been extended to include the human decisions and actions that affect components' performances, and the management factors that affect behaviors and can thus be root causes of system failures. By computing the risk with and without proposed measures, one can then set priorities among different risk management options under resource constraints. In this article, I present briefly the engineering risk analysis method, then several illustrations of risk computations that can be used to identify a system's weaknesses and the most cost-effective way to fix them. The first example concerns the heat shield of the space shuttle orbiter and shows the relative risk contribution of the tiles in different areas of the orbiter's surface. The second application is to patient risk in anesthesia and demonstrates how the engineering risk analysis method can be used in the medical domain to rank the benefits of risk mitigation measures, in that case, mostly organizational. The third application is a model of seismic risk analysis and mitigation, with application to the San Francisco Bay area for the assessment of the costs and benefits of different seismic provisions of building codes. In all three cases, some aspects of the results were not intuitively obvious. The probabilistic risk analysis (PRA) method allowed identifying system weaknesses and the most cost-effective way to fix them.
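The basic computations referred to above (system failure probability, component importance, and the risk reduction from a proposed measure) can be sketched in a few lines. The cut sets and probabilities below are hypothetical and unrelated to the three case studies.

```python
p = {"A": 1e-3, "B": 5e-4, "C": 2e-3}            # basic-event probabilities (hypothetical)
cut_sets = [("A", "B"), ("A", "C"), ("B", "C")]  # minimal cut sets (2-out-of-3 failure)

def p_system(p):
    # rare-event approximation: sum of cut-set probabilities
    return sum(p[a] * p[b] for a, b in cut_sets)

base = p_system(p)
for comp in p:
    # Fussell-Vesely importance: fraction of the risk that involves this component
    fv = sum(p[a] * p[b] for a, b in cut_sets if comp in (a, b)) / base
    print(f"component {comp}: Fussell-Vesely importance = {fv:.2f}")

# risk with and without a proposed measure that halves component A's failure probability
improved = dict(p, A=p["A"] / 2)
print(f"risk reduction from fixing A: {base - p_system(improved):.2e}")
```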

13.
Trade in animals and animal products poses an uncertain and variable risk of introducing exotic animal diseases into importing countries. Risk analysis provides importing countries with an objective, transparent, and internationally accepted method for assessing that risk. Over the last decades, European Union countries have quite frequently conducted probabilistic risk assessments to quantify the risk of introduction of rare animal diseases into their territories. Most probabilistic animal health risk assessments have typically been classified into one-level and multilevel binomial models. One-level models are simpler than multilevel models because they assume that animals or products originate from a single population. However, it is unknown whether such simplification may produce substantially different results compared to those obtained with multilevel models. Here, data used in a probabilistic multilevel binomial model formulated to assess the risk of highly pathogenic avian influenza introduction into Spain were reanalyzed using a one-level binomial model, and the outcomes were compared. An alternative ordinal model is also proposed, which relies on simpler assumptions and less information than traditional one-level and multilevel approaches require. Results suggest that, at least under certain circumstances, results of the one-level and ordinal approaches are similar to those obtained using multilevel models. Consequently, we argue that, when data are insufficient to run traditional probabilistic models, the ordinal approach presented here may be a suitable alternative for ranking exporting countries in terms of the risk they pose for the spread of rare animal diseases into disease-free countries.
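For orientation, a one-level binomial model of the kind contrasted above can be written down very compactly: every imported animal is treated as an independent draw from a single source population with uncertain prevalence. The sketch below uses made-up prevalence, detection, and trade-volume assumptions, not the Spanish HPAI data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_iter = 100_000

prevalence = rng.beta(1, 5000, n_iter)   # uncertain infection prevalence in the source population
p_detect   = rng.beta(90, 10, n_iter)    # probability an infected animal is detected at the border
n_imported = 2000                        # animals imported per year (hypothetical)

p_release_per_animal = prevalence * (1 - p_detect)
p_at_least_one = 1 - (1 - p_release_per_animal) ** n_imported

print("median annual introduction probability:", np.median(p_at_least_one))
print("95th percentile                       :", np.percentile(p_at_least_one, 95))
```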

14.
In this work, we introduce a generalized rationale for local sensitivity analysis (SA) methods that makes it possible to solve the problems connected with input constraints. Several models in use in the risk analysis field are characterized by the presence of deterministic relationships among the input parameters. However, SA issues related to the presence of constraints have mainly been dealt with in a heuristic fashion. We start with a systematic analysis of the effects of constraints. The findings can be summarized in the following three effects. (i) Constraints make it impossible to vary one parameter while keeping all others fixed. (ii) The model output becomes insensitive to a parameter if a constraint is solved for that parameter. (iii) Sensitivity analysis results depend on which parameter is selected as dependent. These effects are explained by proposing a result that leads to a natural extension of the local SA rationale introduced in Helton (1993). We then extend the definitions of the Birnbaum, criticality, and differential importance measures to the constrained case. In addition, a procedure is introduced that yields constrained sensitivity results at the same cost as in the absence of constraints. The application to a nonbinary event tree concludes the article, providing a numerical illustration of the above findings.
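Effect (iii) is easy to see numerically: under a constraint such as x1 + x2 + x3 = 1, the local sensitivity to x1 changes depending on which parameter is eliminated. The toy model below is an illustrative assumption, not one of the article's test cases.

```python
def g(x1, x2, x3):
    return 2.0 * x1 + 5.0 * x2 * x3        # hypothetical model output

x1, x2, x3 = 0.2, 0.3, 0.5                  # a point satisfying x1 + x2 + x3 = 1
h = 1e-6

# solve the constraint for x3 (x3 = 1 - x1 - x2), then perturb x1
d_solve_x3 = (g(x1 + h, x2, 1 - (x1 + h) - x2) - g(x1, x2, x3)) / h

# solve the constraint for x2 (x2 = 1 - x1 - x3), then perturb x1
d_solve_x2 = (g(x1 + h, 1 - (x1 + h) - x3, x3) - g(x1, x2, x3)) / h

print("dY/dx1 with x3 dependent:", round(d_solve_x3, 3))   # = 2 - 5*x2 =  0.5
print("dY/dx1 with x2 dependent:", round(d_solve_x2, 3))   # = 2 - 5*x3 = -0.5
```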

15.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The data available to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation to which they are applied, which leads to a "sample selection bias." We enhance data-driven damage models by applying methods, not previously applied to damage modeling, to correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of these methods is also adjusted to our problem. These three methods are combined with stochastic generation of synthetic damage data. We demonstrate that for both case studies the sample selection bias correction techniques reduce model errors; for the mean bias error this reduction can exceed 30%. The novel combination with stochastic data generation appears to enhance these techniques further. This shows that sample selection bias correction methods are beneficial for damage model transfer.
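One widely used family of corrections reweights the training samples by an estimated density ratio between target and source domains before fitting. The sketch below shows that pattern on synthetic flood-like data with a classifier-based weight estimate; it is a generic illustration of the idea, not the specific correction methods or data used in the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# source (training) region sees shallow floods; target region sees deeper floods
X_src = rng.normal([0.6, 1.0], [0.3, 0.4], size=(2000, 2))   # water depth, duration
X_tgt = rng.normal([1.4, 1.2], [0.4, 0.4], size=(2000, 2))
y_src = np.clip(0.5 * X_src[:, 0] + 0.1 * X_src[:, 1] +
                rng.normal(0, 0.05, 2000), 0, 1)             # damage fraction (synthetic)

# a classifier distinguishing source from target gives w(x) ~ p_target(x) / p_source(x)
domain_clf = LogisticRegression().fit(
    np.vstack([X_src, X_tgt]),
    np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))])
p_tgt = domain_clf.predict_proba(X_src)[:, 1]
weights = np.clip(p_tgt / (1 - p_tgt), 0, 50)                # importance weights

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_src, y_src, sample_weight=weights)               # bias-corrected damage model
print("predicted mean damage fraction in target region:", model.predict(X_tgt).mean())
```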

16.
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and has so far not been experimentally refuted, as have the other two models. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.
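For the two simpler models named above, the steady-state hepatic clearance expressions are standard textbook formulas and diverge most for high-extraction compounds; the short sketch below compares them numerically with hypothetical parameter values.

```python
import numpy as np

Q_h    = 1.5    # hepatic blood flow, L/min (hypothetical)
fu     = 0.1    # fraction unbound
CL_int = 100.0  # intrinsic clearance, L/min

# venous equilibration ("well-stirred") model
cl_ws = Q_h * fu * CL_int / (Q_h + fu * CL_int)

# parallel tube model
cl_pt = Q_h * (1 - np.exp(-fu * CL_int / Q_h))

print(f"venous equilibration hepatic clearance: {cl_ws:.2f} L/min")
print(f"parallel tube hepatic clearance       : {cl_pt:.2f} L/min")
```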

17.
Landslide Risk Models for Decision Making
This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost effective. It allows identifying priority areas for the implementation of actions to reduce vulnerability (elements) or hazard (processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.
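In its simplest form, the hazard-exposure-vulnerability combination reduces to an expected annual monetary loss per zone, as the toy sketch below shows; all probabilities, vulnerabilities, and exposure values are invented.

```python
zones = {
    "zone_A": {"exposure": 2.0e6, "scenarios": [(0.02, 0.10), (0.005, 0.60)]},
    "zone_B": {"exposure": 8.0e5, "scenarios": [(0.05, 0.20), (0.010, 0.80)]},
}
# each scenario: (annual probability of the landslide event, vulnerability in [0, 1])

for name, z in zones.items():
    eal = sum(p * v * z["exposure"] for p, v in z["scenarios"])
    print(f"{name}: expected annual loss = {eal:,.0f} (monetary units)")
```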

18.
Rios J, Rios Insua D. Risk Analysis, 2012, 32(5): 894-915
Recent large-scale terrorist attacks have raised interest in models for resource allocation against terrorist threats. The unifying theme in this area is the need to develop methods for the analysis of allocation decisions when risks stem from the intentional actions of intelligent adversaries. Most approaches to these problems have a game-theoretic flavor although there are also several interesting decision-analytic-based proposals. One of them is the recently introduced framework for adversarial risk analysis, which deals with decision-making problems that involve intelligent opponents and uncertain outcomes. We explore how adversarial risk analysis addresses some standard counterterrorism models: simultaneous defend-attack models, sequential defend-attack-defend models, and sequential defend-attack models with private information. For each model, we first assess critically what would be a typical game-theoretic approach and then provide the corresponding solution proposed by the adversarial risk analysis framework, emphasizing how to coherently assess a predictive probability model of the adversary's actions, in a context in which we aim at supporting decisions of a defender versus an attacker. This illustrates the application of adversarial risk analysis to basic counterterrorism models that may be used as basic building blocks for more complex risk analysis of counterterrorism problems.
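A toy sketch (hypothetical payoffs and probabilities, not any model from the article) of the adversarial risk analysis idea for a simultaneous defend-attack problem: the defender assesses a predictive distribution over the attacker's actions and maximizes expected utility, rather than solving for a game-theoretic equilibrium.

```python
defenses = ["no_extra_security", "harden_site"]
attacks  = ["no_attack", "attack"]

# defender's predictive probability of each attacker action, given her own choice
# (in ARA this comes from modeling the attacker's decision problem; numbers are made up)
p_attack = {"no_extra_security": {"no_attack": 0.7, "attack": 0.3},
            "harden_site":       {"no_attack": 0.9, "attack": 0.1}}

# defender's utility for each (defense, attack) pair (hypothetical)
utility = {("no_extra_security", "no_attack"): 0.0,
           ("no_extra_security", "attack"):   -100.0,
           ("harden_site", "no_attack"):      -10.0,   # cost of hardening
           ("harden_site", "attack"):         -40.0}

best = max(defenses,
           key=lambda d: sum(p_attack[d][a] * utility[(d, a)] for a in attacks))
print("defense maximizing expected utility:", best)
```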

19.
20.
A challenge for large-scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co-occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure-relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new "Tiered Exposure Ranking" (TiER) framework, developed to support various aspects of risk-relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both "discovery-driven" and "hypothesis-driven" analyses. "Tier 1" applications focus on "exposomic" pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk-relevant exposure metrics for populations and individuals. In this article, "tier 1" applications of TiER explore identification of potentially causative associations among risk factors, for prioritizing further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A "tier 2" application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk-relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.
