Similar Articles
1.
This article studies the effects of incorporating the interdependence among London small business defaults into a risk analysis framework, using data from just before the financial crisis. We propose an extension of standard scoring models that takes into account the spatial dimensions and demographic characteristics of small and medium-sized enterprises (SMEs), such as legal form, industry sector, and number of employees. We estimate spatial probit models using different distance matrices based either on spatial location alone or on an interaction between spatial location and demographic characteristics. We find that the interdependence, or contagion, component defined on spatial and demographic characteristics is significant and that it improves the ability to predict defaults of non-start-ups in London. Furthermore, including contagion effects among SMEs alters the parameter estimates of risk determinants. The approach can be extended to other risk analysis applications where spatial risk may incorporate correlation based on other aspects.
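A minimal sketch of the weight-matrix idea on synthetic data: the article estimates a full spatial probit, while this stand-in simply augments a standard probit with a spatially lagged default rate built from inverse-distance weights restricted to same-sector pairs. All data, names, and parameter values below are hypothetical.

```python
# Synthetic sketch: probit augmented with a contagion proxy, not the
# article's full spatial probit estimator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))       # firm locations (km), synthetic
sector = rng.integers(0, 5, size=n)            # demographic trait (industry)
X = sm.add_constant(rng.normal(size=(n, 2)))   # firm-level covariates
y = rng.binomial(1, 0.2, size=n)               # observed defaults, synthetic

# Inverse-distance weights for same-sector pairs, row-normalized: an
# interaction of spatial location with a demographic characteristic.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.where((sector[:, None] == sector[None, :]) & (d > 0), 1.0, 0.0)
W = np.divide(W, d, out=np.zeros_like(W), where=d > 0)
W /= np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)

Xc = np.column_stack([X, W @ y])               # append neighbors' default rate
fit = sm.Probit(y, Xc).fit(disp=0)
print(fit.params)                              # last coefficient: contagion proxy
```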

2.
Experimental animal studies often serve as the basis for predicting risk of adverse responses in humans exposed to occupational hazards. A statistical model is applied to exposure-response data, and this fitted model may be used to obtain estimates of the exposure associated with a specified level of adverse response. Unfortunately, a number of different statistical models are candidates for fitting the data and may result in wide-ranging estimates of risk. Bayesian model averaging (BMA) offers a strategy for addressing uncertainty in the selection of statistical models when generating risk estimates. This strategy is illustrated with two examples: applying the multistage model to cancer responses, and fitting different quantal models to kidney lesion data. BMA provides excess risk estimates or benchmark dose estimates that reflect model uncertainty.
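As one hedged illustration of the BMA workflow, the sketch below fits two candidate quantal models to made-up data, converts BIC values into approximate posterior model probabilities, and averages the resulting risk estimates; the article's multistage and kidney-lesion analyses are not reproduced here.

```python
# Hedged BMA sketch on made-up quantal data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n = np.array([50, 50, 50, 50, 50])
k = np.array([2, 4, 9, 20, 38])                # responders (made up)

def nll(theta, prob_fn):
    p = np.clip(prob_fn(theta, dose), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

def loglogistic(t, d):   # background + log-logistic in log dose
    return t[0] + (1 - t[0]) / (1 + np.exp(-t[1] - t[2] * np.log(d + 1e-9)))

def logprobit(t, d):     # background + probit in log dose
    return t[0] + (1 - t[0]) * norm.cdf(t[1] + t[2] * np.log(d + 1e-9))

fits, bics = [], []
for f in (loglogistic, logprobit):
    r = minimize(nll, x0=[0.05, -2.0, 1.0], args=(f,), method="Nelder-Mead")
    fits.append((f, r.x))
    bics.append(2 * r.fun + 3 * np.log(n.sum()))  # 3 params; N = total subjects

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                   # approximate model probabilities

d0 = 5.0
probs = [f(t, np.array([d0]))[0] for f, t in fits]
print("model weights:", w,
      "| BMA response probability at dose 5:", np.dot(w, probs))
```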

3.
4.
Multistage clonal growth models are of interest for cancer risk assessment because they can explicitly incorporate data on cell replication. Both approximate and exact formulations of the two-stage growth model have been described. The exact solution considers the conditional probability of tumors arising in previously tumor-free animals; the approximate solution estimates the total probability of tumor formation. The exact solution is much more computationally intensive when time-dependent cell growth parameters are included, and the approximate solution deviates from the exact solution at high incidences and probabilities of tumor. This report describes a computationally tractable "improved approximation" to the exact solution. Our improved approximation includes a correction term that adjusts the unconditional expectation of intermediate cells based on the time history of formation of intermediate cells by mutation of normal cells (recruitment) or by cell division in the intermediate cell population (expansion). The improved approximation matched the exact solution much more closely than the approximate solution over a wide range of parameter values. The correction term also appears to provide insight into the biological factors that contribute to the variance of the expectation for the number of intermediate cells over time.
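For orientation, a commonly used approximate (hazard-based) formulation of the two-stage model can be written as below; the notation is the generic textbook form, not the article's, and the article's correction term is not reproduced.

\[
\frac{d\,\mathbb{E}[I(t)]}{dt} = \mu_1 N(t) + (\alpha - \beta)\,\mathbb{E}[I(t)], \qquad
P(t) \approx 1 - \exp\!\left(-\mu_2 \int_0^t \mathbb{E}[I(s)]\,ds\right),
\]

where \(N(t)\) is the number of normal cells, \(\mu_1\) and \(\mu_2\) are the first and second mutation rates, and \(\alpha\) and \(\beta\) are the division and death rates of intermediate cells \(I\). The approximation degrades at high tumor probabilities because \(\mathbb{E}[I(t)]\) is not conditioned on the animal remaining tumor-free; the correction term described above adjusts this expectation using the recruitment and expansion history.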

5.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and from the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible and then probable values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion in the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, in which models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single-valued unknown and where information about the models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.

6.
The alleviation of food-borne diseases caused by microbial pathogens remains a great concern for ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a single model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious how to single out one best model, but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated using the selected models and by averaging over all models using Akaike's weights. In addition to accounting for parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from the other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that, compared with best-fitting models chosen by the Akaike information criterion, model averaging has reduced bias, better precision, and coverage probabilities closer to the 95% nominal level.
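In the standard presentation, Akaike-weight model averaging has the form below; this is the common textbook form (with a Buckland-type standard error), not necessarily the article's exact notation.

\[
w_i = \frac{\exp(-\Delta_i/2)}{\sum_j \exp(-\Delta_j/2)}, \qquad
\Delta_i = \mathrm{AIC}_i - \min_j \mathrm{AIC}_j, \qquad
\hat r = \sum_i w_i\, \hat r_i,
\]
\[
\widehat{\mathrm{se}}(\hat r) = \sum_i w_i \sqrt{\widehat{\mathrm{Var}}(\hat r_i \mid M_i) + (\hat r_i - \hat r)^2},
\]

where \(\hat r_i\) is the risk estimate under model \(M_i\). The second term under the square root is the between-model variance that a single selected model ignores, which is why the averaged intervals are "more honest."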

7.
In a series of articles and a health-risk assessment report, scientists at the CIIT Hamner Institutes developed a model (CIIT model) for estimating respiratory cancer risk due to inhaled formaldehyde within a conceptual framework incorporating extensive mechanistic information and advanced computational methods at the toxicokinetic and toxicodynamic levels. Several regulatory bodies have utilized predictions from this model; on the other hand, upon detailed evaluation the California EPA has decided against doing so. In this article, we study the CIIT model to identify key biological and statistical uncertainties that need careful evaluation if such two-stage clonal expansion models are to be used for extrapolation of cancer risk from animal bioassays to human exposure. Broadly, these issues pertain to the use and interpretation of experimental labeling index and tumor data, the evaluation and biological interpretation of estimated parameters, and uncertainties in model specification, in particular that of initiated cells. We also identify key uncertainties in the scale-up of the CIIT model to humans, focusing on assumptions underlying model parameters for cell replication rates and formaldehyde-induced mutation. We discuss uncertainties in identifying parameter values in the model used to estimate and extrapolate DNA protein cross-link levels. The authors of the CIIT modeling endeavor characterized their human risk estimates as "conservative in the face of modeling uncertainties." The uncertainties discussed in this article indicate that such a claim is premature.

8.
To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation, and is used to estimate allergic population eliciting doses for multiple food allergens.
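The stacking step can be summarized with the usual posterior predictive stacking objective; the notation below is the generic form rather than the article's, with the frailty random effects entering through each candidate model \(M_k\).

\[
\max_{w_k \ge 0,\ \sum_k w_k = 1} \ \sum_{i=1}^{n} \log \sum_{k=1}^{K} w_k \, p(y_i \mid y_{-i}, M_k),
\]

where \(p(y_i \mid y_{-i}, M_k)\) is the leave-one-out posterior predictive density of observation \(i\) under model \(M_k\); the optimized weights are then applied to the models' survival-curve estimates to form the combined dose-to-failure estimate.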

9.
Dose-response models in microbial risk assessment consider two steps in the process ultimately leading to illness: from exposure to (asymptomatic) infection, and from infection to (symptomatic) illness. Most data and theoretical approaches are available for the exposure-infection step; the infection-illness step has received less attention. Furthermore, current microbial risk assessment models do not account for acquired immunity. These limitations may lead to biased risk estimates. We consider the effects on risk estimates of both the dose dependency of the conditional probability of illness given infection and acquired immunity, and demonstrate these effects in a case study on exposure to Campylobacter jejuni. To account for acquired immunity in risk estimates, an inflation factor is proposed. The inflation factor depends on the relative rates of loss of protection over exposure. The conditional probability of illness given infection is based on a previously published model accounting for the within-host dynamics of illness. We find that at low (average) doses, the infection-illness model has the greatest impact on risk estimates, whereas at higher (average) doses and/or increased exposure frequencies, the acquired immunity model has the greatest impact. The proposed models are strongly nonlinear: reducing exposure is not expected to lead to a proportional decrease in risk and, under certain conditions, may even lead to an increase in risk. The impact of different dose-response models on risk estimates is particularly pronounced when introducing heterogeneity in the population exposure distribution.
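For concreteness, the two-step decomposition the article builds on can be written as below, with the widely used beta-Poisson form for the infection step shown as an assumed example; the article's dose-dependent illness model and immunity inflation factor are additional layers not reproduced here.

\[
P_{\mathrm{ill}}(d) = P_{\mathrm{inf}}(d)\; P(\mathrm{ill} \mid \mathrm{inf};\, d), \qquad
P_{\mathrm{inf}}(d) = 1 - \left(1 + \frac{d}{\beta}\right)^{-\alpha},
\]

where \(d\) is the (average) ingested dose and \(\alpha, \beta\) are fitted parameters. Making \(P(\mathrm{ill} \mid \mathrm{inf};\, d)\) dose dependent and correcting for acquired immunity is what produces the strongly nonlinear behavior described above.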

10.
Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a detailed and quantitative benchmarking analysis of three multimedia models. The three models—RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE)—represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study was performed by individuals who participate directly in the ongoing design, development, and application of the models. Model form and function are compared by applying the models to a series of hypothetical problems, first isolating individual modules (e.g., atmospheric, surface water, groundwater) and then simulating multimedia-based risk resulting from contaminant release from a single source to multiple environmental media. Study results show that the models differ with respect to the environmental processes included (i.e., model features) and in the mathematical formulation and assumptions related to the implementation of solutions. Depending on the application, numerical estimates resulting from the models may vary over several orders of magnitude. On the other hand, two or more differences may offset each other such that model predictions are virtually equal. The conclusion from these results is that multimedia models are complex due to the integration of the many components of a risk assessment, and this complexity must be fully appreciated during each step of the modeling process (i.e., model selection, problem conceptualization, model application, and interpretation of results).

11.
A Monte Carlo simulation is incorporated into a risk assessment for trichloroethylene (TCE) using physiologically-based pharmacokinetic (PBPK) modeling coupled with the linearized multistage model to derive human carcinogenic risk extrapolations. The Monte Carlo technique incorporates physiological parameter variability to produce a statistically derived range of risk estimates which quantifies specific uncertainties associated with PBPK risk assessment approaches. Both inhalation and ingestion exposure routes are addressed. Simulated exposure scenarios were consistent with those used by the Environmental Protection Agency (EPA) in their TCE risk assessment. Mean values of physiological parameters were gathered from the literature for both mice (carcinogenic bioassay subjects) and for humans. Realistic physiological value distributions were assumed using existing data on variability. Mouse cancer bioassay data were correlated to total TCE metabolized and area-under-the-curve (blood concentration) trichloroacetic acid (TCA) as determined by a mouse PBPK model. These internal dose metrics were used in a linearized multistage model analysis to determine dose metric values corresponding to a 10^-6 lifetime excess cancer risk. Using a human PBPK model, these metabolized doses were then extrapolated to equivalent human exposures (inhalation and ingestion). The Monte Carlo iterations with varying mouse and human physiological parameters produced a range of human exposure concentrations producing a 10^-6 risk.
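The Monte Carlo logic can be illustrated with a deliberately toy surrogate: sample physiological parameters, map exposure to an internal dose metric, and invert for the exposure giving a target risk. Everything below (the Michaelis-Menten surrogate, parameter distributions, and slope q1) is assumed for illustration and is far simpler than the article's PBPK models.

```python
# Toy Monte Carlo surrogate for the PBPK-based extrapolation; all values
# are assumed, not taken from the article.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
Vmax = rng.lognormal(np.log(5.0), 0.3, n)  # max metabolic rate (mg/h), assumed
Km = rng.lognormal(np.log(0.5), 0.3, n)    # half-saturation conc. (mg/L), assumed

q1 = 1e-3               # assumed low-dose slope per unit metabolized dose
M_target = 1e-6 / q1    # internal dose metric yielding a 1e-6 lifetime risk

# Invert the Michaelis-Menten surrogate M = Vmax*C/(Km + C) for each draw,
# giving the distribution of exposure concentrations at the target risk.
C = Km * M_target / (Vmax - M_target)

print("exposure concentration for 1e-6 risk (mg/L): "
      f"median {np.median(C):.2e}, 5th pct {np.percentile(C, 5):.2e}")
```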

12.
Typically, the uncertainty affecting the parameters of physiologically based pharmacokinetic (PBPK) models is ignored because it is not currently practical to adjust their values using classical parameter estimation techniques. This issue of parametric variability in a physiological model of benzene pharmacokinetics is addressed in this paper. Monte Carlo simulations were used to study the effects on the model output arising from variability in its parameters. The output was classified into two categories, depending on whether the output of the model on a particular run was judged to be generally consistent with published experimental data. Statistical techniques were used to examine sensitivity and interaction in the parameter space. The model was evaluated against the data from three different experiments in order to test for the structural adequacy of the model and the consistency of the experimental results. The regions of the parameter space associated with various inhalation and gavage experiments are distinct, and the model as presently structured cannot adequately represent the outcomes of all experiments. Our results suggest that further effort is required to distinguish between the structural adequacy of the model and the consistency of the experimental results. The impact of our results on the risk assessment process for benzene is also examined.

13.
Risk Analysis, 2018, 38(1): 163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically-based risk estimates based on a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically-based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
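A hedged sketch of the IUA idea: draw concentration-response coefficients from several plausible models in proportion to assumed model weights, then propagate the mixed draws to a health-gain estimate. All numbers below are illustrative, not the article's.

```python
# Illustrative integrated uncertainty analysis: mix coefficient draws
# across plausible concentration-response models.
import numpy as np

rng = np.random.default_rng(2)
# (model weight, coefficient mean per ppb, coefficient std) -- illustrative
models = [(0.5, 0.004, 0.0010),
          (0.3, 0.002, 0.0008),
          (0.2, 0.006, 0.0020)]

beta = np.concatenate([rng.normal(mu, sd, int(w * 100_000))
                       for w, mu, sd in models])

delta_c = 5.0              # hypothetical ppb reduction from a tighter NAAQS
baseline_deaths = 10_000   # hypothetical baseline respiratory deaths
avoided = baseline_deaths * (1.0 - np.exp(-beta * delta_c))

print(f"avoided deaths: mean {avoided.mean():.0f}, "
      f"95% interval [{np.percentile(avoided, 2.5):.0f}, "
      f"{np.percentile(avoided, 97.5):.0f}]")
```

The resulting interval reflects both within-model statistical uncertainty and between-model disagreement, which is what distinguishes an IUA from the single "core"-model presentation.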

14.
Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic models, raise the issue of how to evaluate whether the models are adequate for proposed uses, including safety or risk assessment. A six-step process for model evaluation is described. It relies on multidisciplinary expertise to address the biological, toxicological, mathematical, statistical, and risk assessment aspects of the modeling and its application. The first step is to have a clear definition of the purpose(s) of the model in the particular assessment; this provides critical perspectives on all subsequent steps. The second step is to evaluate the biological characterization described by the model structure based on the intended uses of the model and available information on the compound being modeled or related compounds. The next two steps review the mathematical equations used to describe the biology and their implementation in an appropriate computer program. At this point, the values selected for the model parameters (i.e., model calibration) must be evaluated. Thus, the fifth step is a combination of evaluating the model parameterization and calibration against data and evaluating the uncertainty in the model outputs. The final step is to evaluate specialized analyses that were done using the model, such as modeling of population distributions of parameters leading to population estimates for model outcomes or inclusion of early pharmacodynamic events. The process also helps to define the kinds of documentation that would be needed for a model to facilitate its evaluation and implementation.

15.
In any model, the values of estimates for various parameters are obtained from different sources, each with its own level of uncertainty. When the probability distributions of the estimates are obtained, as opposed to point values only, the measurement uncertainties in the parameter estimates may be addressed. However, the sources used for obtaining the data and the models used to select appropriate distributions are of differing degrees of uncertainty. A hierarchy of different sources of uncertainty, based upon one's ability to validate data and models empirically, is presented. When model parameters are aggregated with different levels of the hierarchy represented, this implies distortion or degradation in the utility and validity of the models used. Means to identify and deal with such heterogeneous data sources are explored, and a number of approaches to addressing this problem are presented. One approach, using Range/Confidence Estimates coupled with an Information Value Analysis Process, is presented as an example.

16.
A Flexible Count Data Regression Model for Risk Analysis
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLMs) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data, and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM fits overdispersed data sets as well as the commonly used existing models while outperforming those models on underdispersed data sets.
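The COM distribution underlying the proposed GLM has probability mass function proportional to lam^y / (y!)^nu; the sketch below implements it and shows how nu moves the variance relative to the mean (in a COM GLM one would then link log(lam) to covariates). Parameter values are illustrative.

```python
# Hedged sketch of the Conway-Maxwell-Poisson pmf and its dispersion.
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, jmax=500):
    """P(Y=y) = lam^y / (y!)^nu / Z(lam, nu), normalized by truncated sum."""
    j = np.arange(jmax)
    logZ = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logZ)

y = np.arange(120)
for nu in (0.5, 1.0, 2.0):        # over-, equi-, and under-dispersed cases
    p = com_poisson_pmf(y, lam=5.0, nu=nu)
    mean = np.sum(y * p)
    var = np.sum((y - mean) ** 2 * p)
    print(f"nu={nu}: mean={mean:.2f} var={var:.2f}")
# nu = 1 recovers the Poisson (variance equal to the mean); nu > 1 gives
# variance below the mean, nu < 1 above it, which is the flexibility the
# article exploits in a regression setting.
```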

17.
Since the National Food Safety Initiative of 1997, risk assessment has been an important issue in food safety areas. Microbial risk assessment is a systematic process for describing and quantifying the potential for adverse health effects associated with exposure to microorganisms. Various dose-response models for estimating microbial risks have been investigated. We considered four two-parameter models and four three-parameter models in order to evaluate variability among the models for microbial risk assessment, using infectivity and illness data from studies with human volunteers exposed to a variety of microbial pathogens. Model variability is measured in terms of estimated ED01s and ED10s, with the view that these effective dose levels correspond to the lower and upper limits of the 1% to 10% risk range generally recommended for establishing benchmark doses in risk assessment. Parameters of the statistical models are estimated using the maximum likelihood method. In this article, a weighted average of effective dose estimates from eight two- and three-parameter dose-response models, with weights determined by the Kullback information criterion, is proposed to address model uncertainties in microbial risk assessment. The proposed procedures for incorporating model uncertainties and making inferences are illustrated with human infection/illness dose-response data sets.
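In outline, the procedure weights per-model effective doses much as in information-criterion averaging; the generic form is sketched below, with the extra-risk definition of the effective dose assumed and \(\Delta_m\) the difference in the (Kullback) information criterion, whose exact definition follows the article.

\[
\frac{\pi_m\!\big(\widehat{ED}_q^{(m)}\big) - \pi_m(0)}{1 - \pi_m(0)} = q, \qquad
\widehat{ED}_q = \sum_m w_m\, \widehat{ED}_q^{(m)}, \qquad
w_m = \frac{\exp(-\Delta_m/2)}{\sum_{m'} \exp(-\Delta_{m'}/2)},
\]

where \(\pi_m(d)\) is the fitted response probability under model \(m\) and \(q = 0.01\) or \(0.10\) gives the ED01 and ED10 used to gauge model variability.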

18.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose-response patterns observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
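One way such a curated database can feed a Bayesian analysis is sketched below: summarize historical slope estimates into a lognormal prior, then fit a new quantal data set by penalized (MAP) likelihood. The historical values, data, and model form here are hypothetical, not drawn from the 733-set database.

```python
# Hedged sketch: historical-data prior used as a penalty in a MAP fit.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

hist_slopes = np.array([0.8, 1.2, 1.5, 0.9, 1.1, 2.0, 0.7])  # hypothetical
mu0 = np.log(hist_slopes).mean()          # lognormal prior hyperparameters
sd0 = np.log(hist_slopes).std(ddof=1)

dose = np.array([0.0, 1.0, 3.0, 10.0])    # new study (made up)
n = np.full(4, 20)
k = np.array([0, 3, 8, 15])               # responders

def neg_log_post(theta):
    a, log_b = theta
    p = np.clip(expit(a + np.exp(log_b) * np.log(dose + 1e-9)), 1e-9, 1 - 1e-9)
    nll = -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
    prior = 0.5 * ((log_b - mu0) / sd0) ** 2   # lognormal prior on the slope
    return nll + prior

map_fit = minimize(neg_log_post, x0=[-2.0, 0.0], method="Nelder-Mead")
print("MAP intercept:", map_fit.x[0], "MAP slope:", np.exp(map_fit.x[1]))
```

With sparse data the prior dominates and keeps the slope (and hence potency) estimate from drifting to extreme values, which is the stabilizing effect described above.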

19.
Adopting different stochastic-process models to describe the price dynamics of an underlying asset can greatly affect derivative pricing and risk management. In the literature, the stochastic processes adopted for the same asset are often inconsistent or even contradictory. Taking the GBM and OU processes as examples, this article proposes a statistical inference method for selecting, from several candidate models, the stochastic process that better describes the price dynamics of the underlying asset. The method applies the principle of backtesting: the data are split into an estimation window and a test window, with the estimation window used to estimate the parameters of the stochastic process. Then, under the assumption that the model parameters remain unchanged, the out-of-sample distribution of the asset price at each time point of the test window is derived under the null hypothesis, and the null is accepted or rejected according to the frequency with which the observed data fall in the acceptance or rejection region. The article carries out an empirical analysis of stochastic-process selection using commodities, exchange rates, interest rates, and stocks as the underlying assets. The empirical results show that some frequently used stochastic-process models are not necessarily optimal.
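A minimal sketch of the backtest on simulated data: fit GBM on the estimation window, derive the one-step-ahead out-of-sample price distribution at each test time point, and compare the empirical coverage of the 95% acceptance region with its nominal level. The article's actual test statistics and data sets are not reproduced.

```python
# Hedged sketch of the estimation-window / test-window idea; the "true"
# process is OU-like in log price, and GBM is the null model being tested.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
T, kappa, theta, sigma = 1000, 0.05, np.log(100.0), 0.02
x = np.empty(T)
x[0] = theta
for t in range(1, T):                  # simulate an OU log-price path
    x[t] = x[t - 1] + kappa * (theta - x[t - 1]) + sigma * rng.normal()
price = np.exp(x)

r = np.diff(np.log(price[:500]))       # estimation window: fit GBM
mu, s = r.mean(), r.std(ddof=1)        # GBM implies i.i.d. normal log returns

# Test window: one-step-ahead 95% region for each point given the prior
# price; count how often the realized price falls inside.
logp = np.log(price[499:])             # last estimation point onward
lo = logp[:-1] + mu + norm.ppf(0.025) * s
hi = logp[:-1] + mu + norm.ppf(0.975) * s
test = np.log(price[500:])
coverage = np.mean((test >= lo) & (test <= hi))
print(f"empirical coverage under the GBM null: {coverage:.3f} (nominal 0.95)")
```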

20.
Application of Geostatistics to Risk Assessment
Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.
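A compact, self-contained sketch of ordinary kriging with an assumed exponential covariance model (no variogram fitting, synthetic data); averaging kriged values over the exposure area stands in for the EPC estimate discussed above.

```python
# Hedged ordinary-kriging sketch on synthetic soil-lead data.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0, 100, size=(15, 2))       # sample locations (m)
z = rng.lognormal(3.0, 0.8, size=15)          # e.g., soil lead (mg/kg)

def cov(h, sill=1.0, range_=30.0):
    """Exponential covariance model (assumed, not fitted)."""
    return sill * np.exp(-h / range_)

# Ordinary kriging system: sample covariances plus a Lagrange multiplier
# row/column enforcing that the weights sum to one (unbiasedness).
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A = np.block([[cov(D), np.ones((15, 1))],
              [np.ones((1, 15)), np.zeros((1, 1))]])

def ordinary_krige(x0):
    c0 = np.append(cov(np.linalg.norm(pts - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, c0)
    return w[:-1] @ z                         # kriged estimate at x0

grid = [np.array([gx, gy]) for gx in range(0, 101, 10)
        for gy in range(0, 101, 10)]
est = np.array([ordinary_krige(g) for g in grid])
print(f"area-averaged EPC estimate: {est.mean():.1f} mg/kg "
      f"(sample mean {z.mean():.1f})")
```

Unlike a design-based estimator such as Land's method, the kriged average exploits spatial correlation, which is where the gain in the limited-data scenario comes from.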
