Similar Articles (20 results)
1.
Comparison of Six Dose-Response Models for Use with Food-Borne Pathogens
Food-related illness in the United States is estimated to affect over six million people per year and cost the economy several billion dollars. These illnesses and costs could be reduced if minimum infectious doses were established and used as the basis of regulations and monitoring. However, standard methodologies for dose-response assessment are not yet formulated for microbial risk assessment. The objective of this study was to compare dose-response models for food-borne pathogens and determine which models were most appropriate for a range of pathogens. The statistical models proposed in the literature and chosen for comparison were the log-normal, log-logistic, exponential, beta-Poisson, and Weibull-gamma. These were fit, using the method of maximum likelihood, to four data sets taken from the published literature: Shigella flexneri, Shigella dysenteriae, Campylobacter jejuni, and Salmonella typhosa. The Weibull-gamma, the only model with three parameters, was also the only model capable of fitting all of the data sets examined. Infectious doses were also calculated using each model. Within any given data set, the infectious dose estimated to affect one percent of the population varied across models by anywhere from one to as many as nine orders of magnitude, illustrating the differences in extrapolation among the dose-response models. More data are needed to compare models and examine extrapolation from high to low doses for food-borne pathogens.
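The fitting procedure this abstract describes can be sketched concretely. The following is a minimal illustration of fitting the one-parameter exponential model by maximum likelihood and computing the dose affecting 1% of the population; the dose-response counts are invented for illustration, not taken from the Shigella, Campylobacter, or Salmonella data sets the study used:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical dose groups: dose (organisms), subjects, and responders.
dose = np.array([1e2, 1e3, 1e4, 1e5])
n    = np.array([10, 10, 10, 10])
pos  = np.array([1, 3, 7, 10])

def exp_model(d, r):
    """Exponential dose-response model: P(response) = 1 - exp(-r * d)."""
    return 1.0 - np.exp(-r * d)

def neg_log_lik(log_r):
    """Binomial negative log-likelihood, parameterized on log(r) for stability."""
    p = np.clip(exp_model(dose, np.exp(log_r)), 1e-12, 1.0 - 1e-12)
    return -np.sum(pos * np.log(p) + (n - pos) * np.log(1.0 - p))

fit = minimize_scalar(neg_log_lik, bounds=(-20.0, 0.0), method="bounded")
r_hat = np.exp(fit.x)

# Dose estimated to affect 1% of the population: solve 1 - exp(-r d) = 0.01.
id01 = -np.log(1.0 - 0.01) / r_hat
```

The other models can be fit the same way by swapping in their response functions; comparing the resulting 1%-infectious-dose estimates across models reproduces the kind of spread the abstract describes.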

2.
Prediction of human cancer risk from the results of rodent bioassays requires two types of extrapolation: a qualitative extrapolation from short-lived rodent species to long-lived humans, and a quantitative extrapolation from near-toxic doses in the bioassay to low-level human exposures. Experimental evidence on the accuracy of prediction between closely related species tested under similar experimental conditions (rats, mice, and hamsters) indicates that: (1) If a chemical is positive in one species, it will be positive in the second species about 75% of the time; however, since about 50% of test chemicals are positive in each species, by chance alone one would expect a predictive value between species of about 50%. (2) If a chemical induces tumors in a particular target organ in one species, it will induce tumors in the same organ in the second species about 50% of the time. Similar predictive values are obtained in an analysis of prediction from humans to rats or from humans to mice for known human carcinogens. Limitations of bioassay data for use in quantitative extrapolation are discussed, including constraints on estimates of both carcinogenic potency and the shape of the dose-response in experiments with only two doses and a control. Quantitative extrapolation should be based on an understanding of mechanisms of carcinogenesis, particularly mitogenic effects that are present at high but not low doses.

3.
Although analysis of in vivo pharmacokinetic data necessitates the use of time-dependent physiologically based pharmacokinetic (PBPK) models, risk assessment applications are often driven primarily by steady-state and/or integrated (e.g., AUC) dosimetry. To that end, we present an analysis of steady-state solutions to a PBPK model for a generic volatile chemical metabolized in the liver. We derive an equivalent model that is much simpler and contains many fewer parameters than the full PBPK model. The state of the system can be specified by two state variables: the rate of metabolism and the rate of clearance by exhalation. For a given oral dose rate or inhalation exposure concentration, the system state depends only on the blood-air partition coefficient, the metabolic constants, and the rates of blood flow to the liver and of alveolar ventilation. At exposures where metabolism is close to linear, only the effective first-order metabolic rate is needed. Furthermore, in this case, the relationship between cumulative exposure and average internal dose (e.g., AUC) remains the same for time-varying exposures. We apply our analysis to oral-inhalation route extrapolation, showing that for any dose metric, route equivalence depends only on the parameters that determine the system state. Even if the appropriate dose metric is unknown, bounds can be placed on route-to-route equivalence with very limited data. We illustrate this analysis by showing that it exactly reproduces the PBPK-model-based route-to-route extrapolation in EPA's 2000 risk assessment for vinyl chloride. Overall, we find that in many cases steady-state solutions exactly reproduce or closely approximate the solutions using the full PBPK model, while being substantially more transparent. Subsequent work will examine the utility of steady-state solutions for analyzing cross-species extrapolation and intraspecies variability.
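The kind of steady-state reduction described above can be illustrated with a simple mass balance: at steady state, inhaled intake splits between metabolism and exhalation in proportion to two clearances. The sketch below assumes linear (first-order) hepatic metabolism and a well-stirred liver; the function and all parameter values are illustrative placeholders, not the paper's model or EPA's vinyl chloride parameters:

```python
def steady_state_rates(c_inh, q_alv, q_liv, p_ba, cl_int):
    """Return (rate_metabolized, rate_exhaled) at steady state.

    c_inh  : inhaled concentration (mg/L air)     -- illustrative
    q_alv  : alveolar ventilation (L/h)           -- illustrative
    q_liv  : liver blood flow (L/h)               -- illustrative
    p_ba   : blood:air partition coefficient      -- illustrative
    cl_int : intrinsic first-order metabolic clearance (L/h)
    """
    # Effective hepatic clearance (well-stirred liver): limited by blood flow.
    cl_h = q_liv * cl_int / (q_liv + cl_int)
    # Exhalation acts as a clearance of q_alv / p_ba on blood concentration.
    cl_ex = q_alv / p_ba
    intake = q_alv * c_inh             # rate of uptake at the lung
    frac_met = cl_h / (cl_h + cl_ex)   # fraction of intake metabolized
    return intake * frac_met, intake * (1.0 - frac_met)

met, exh = steady_state_rates(c_inh=0.01, q_alv=400.0, q_liv=90.0,
                              p_ba=20.0, cl_int=60.0)
```

Note how the steady-state split depends only on the blood-air partition coefficient, the metabolic constants, liver blood flow, and alveolar ventilation, matching the parameter list the abstract identifies.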

4.
Aquatic non-native invasive species are commonly traded in the worldwide water garden and aquarium markets, and some of these species pose major threats to the economy, the environment, and human health. Understanding the potential suitable habitat for these species at global and regional scales can inform risk assessments and predict future potential establishment. Typically, global habitat suitability models for freshwater species are fit with only climate variables, which provides little information about suitable terrestrial conditions for aquatic species. Remotely sensed data, including topography and land cover, have the potential to improve our understanding of suitable habitat for aquatic species. In this study, we fit species distribution models using five different model algorithms for three non-native aquatic invasive species with bioclimatic, topographic, and remotely sensed covariates to evaluate potential suitable habitat beyond simple climate matches. The species examined were a frog (Xenopus laevis), a toad (Bombina orientalis), and a snail (Pomacea spp.). Using a unique modeling approach for each species, including background point selection based on known established populations, resulted in robust ensemble habitat suitability models. All models for all species had test area under the receiver operating characteristic curve values greater than 0.70 and percent correctly classified values greater than 0.65. Importantly, we employed multivariate environmental similarity surface maps to evaluate potential extrapolation beyond observed conditions when applying the models globally. These global models provide necessary forecasts of where these aquatic invasive species could establish outside their native range, a key component in risk analyses.

5.
What is the effect of the future on today's decisions? The future plays a part in all of our decisions, whether we utilize formal forecasting techniques or not. Some of the uncertainty of the future can be reduced by applying one or more of the techniques of trend extrapolation, subjective opinion of experts, and construction of scenarios. The results, to be useful to decision makers, must be pertinent, credible, and capable of realization.

6.
Cancer risks for ethylene dibromide (EDB) were estimated by fitting several linear non-threshold additive models to data from a gavage bioassay. Risks predicted by these models were compared to the observed cancer mortality among a cohort of workers occupationally exposed to the same chemical. Models that accounted for the shortened latency period in the gavaged rats predicted upper bound risks that were within a factor of 3 of the observed cancer deaths. Data from an animal inhalation study of EDB also were compatible with the epidemiologic data. These findings contradict those of Ramsey et al. (1978), who reported that extrapolation from animal data produced highly exaggerated risk estimates for EDB-exposed workers. This paper explores the reasons for these discrepant findings.

7.
In multi-stage production-inventory systems, the demand for precursor items derives from the demand for successor items at the later stages in the system. This paper presents a method for evaluating the performance of two different strategies for forecasting the demand for precursor items. The dependent strategy relies on successor item forecasts to build a precursor item forecast, while the independent strategy relies on the extrapolation of past precursor demand. An expression indicating the conditions under which one strategy is preferred to the other is developed, and the effect of changing these conditions is illustrated with simulation results.

8.
The U.S. Environmental Protection Agency (USEPA) guidelines for cancer risk assessment recognize that some chemical carcinogens may have a site-specific mode of action (MOA) involving mutation and cell-killing-induced hyperplasia. The guidelines recommend that for such dual MOA (DMOA) carcinogens, judgment should be used to compare and assess results using separate "linear" (genotoxic) versus "nonlinear" (nongenotoxic) approaches to low-level risk extrapolation. Because the guidelines allow this only when evidence supports reliable risk extrapolation using a validated mechanistic model, they effectively prevent addressing MOA uncertainty when data do not fully validate such a model but otherwise clearly support a DMOA. An adjustment-factor approach is proposed to address this gap, analogous to reference-dose procedures used for classic toxicity endpoints. By this method, even when a "nonlinear" toxicokinetic model cannot be fully validated, the effect of DMOA uncertainty on low-dose risk can be addressed. Application of the proposed approach was illustrated for the case of risk extrapolation from bioassay data on rat nasal tumors induced by chronic lifetime exposure to naphthalene. Bioassay data, toxicokinetic data, and pharmacokinetic analyses were determined to indicate that naphthalene is almost certainly a DMOA carcinogen. Plausibility bounds on rat-tumor-type-specific DMOA-related uncertainty were obtained using a mechanistic two-stage cancer risk model adapted to reflect the empirical link between genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Bound-specific adjustment factors were then used to reduce naphthalene risk estimated by linear extrapolation (under the default genotoxic MOA assumption), to account for the DMOA exhibited by this compound.

9.
Ali Mosleh. Risk Analysis, 2012, 32(11): 1888-1900.
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk; rating agencies, whose work dates back to the 19th century, provide their assessments of the probability of default and the transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis.
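As a rough illustration of accuracy-weighted pooling (and only that: this is not the article's actual Bayesian formulation), each agency's default-probability estimate can be folded into a Beta prior with an evidence weight scaled by its historical accuracy. All names and numbers below are hypothetical:

```python
def pooled_default_probability(prior, estimates):
    """Accuracy-weighted Beta pooling sketch (hypothetical scheme).

    prior     : (a, b) Beta prior pseudo-counts
    estimates : list of (p_i, accuracy_i) pairs, one per rating agency,
                where accuracy_i in (0, 1] scales a nominal evidence weight.
    Returns the posterior mean default probability.
    """
    a, b = prior
    nominal_weight = 50.0  # hypothetical evidence units for a fully accurate agency
    for p_i, acc_i in estimates:
        w = nominal_weight * acc_i
        a += w * p_i           # pseudo-defaults contributed by this agency
        b += w * (1.0 - p_i)   # pseudo-survivals contributed by this agency
    return a / (a + b)

# Two agencies: one historically accurate (0.9), one less so (0.6).
p = pooled_default_probability(prior=(1.0, 99.0),
                               estimates=[(0.02, 0.9), (0.05, 0.6)])
```

The design point this sketch shares with the article's approach is that an agency's influence on the pooled estimate grows with its demonstrated past accuracy rather than being fixed.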

10.
Use of Mechanistic Models to Estimate Low-Dose Cancer Risks
Kenny S. Crump. Risk Analysis, 1994, 14(6): 1033-1038.
The utility of mechanistic models of cancer for predicting cancer risks at low doses is examined. Based upon a general approximation to the dose-response that is valid at low doses, it is shown that at low doses the dose-response predicted by a mechanistic model is a linear combination of the dose-responses for each of the physiological parameters in the model that are affected by exposure. This demonstrates that, unless the mechanistic model provides a theoretical basis for determining the dose-responses for these parameters, the extrapolation of risks to low doses using a mechanistic model is basically "curve fitting," just as is the case when extrapolating using statistical models. This suggests that experiments to generate data for use in mechanistic models should emphasize measuring the dose-response for dose-related parameters as accurately as possible and at the lowest feasible doses.

11.
A Distributional Approach to Characterizing Low-Dose Cancer Risk
Since cancer risk at very low doses cannot be directly measured in humans or animals, mathematical extrapolation models and scientific judgment are required. This article demonstrates a probabilistic approach to carcinogen risk assessment that employs probability trees, subjective probabilities, and standard bootstrapping procedures. The probabilistic approach is applied to the carcinogenic risk of formaldehyde in environmental and occupational settings. Sensitivity analyses illustrate conditional estimates of risk for each path in the probability tree. Fundamental mechanistic uncertainties are characterized. A strength of the analysis is the explicit treatment of alternative beliefs about pharmacokinetics and pharmacodynamics. The resulting probability distributions on cancer risk are compared with the point estimates reported by federal agencies. Limitations of the approach are discussed as well as future research directions.

12.
There has been considerable discussion regarding the conservativeness of low-dose cancer risk estimates based upon linear extrapolation from upper confidence limits. Various groups have expressed a need for best (point) estimates of cancer risk in order to improve risk/benefit decisions. Point estimates of carcinogenic potency obtained from maximum likelihood estimates of low-dose slope may be highly unstable, being sensitive both to the choice of the dose-response model and possibly to minimal perturbations of the data. For carcinogens that augment background carcinogenic processes and/or for mutagenic carcinogens, at low doses the tumor incidence versus target tissue dose is expected to be linear. Pharmacokinetic data may be needed to identify and adjust for exposure-dose nonlinearities. Based on the assumption that the dose response is linear over low doses, a stable point estimate for low-dose cancer risk is proposed. Since various models give similar estimates of risk down to levels of 1%, a stable estimate of the low-dose cancer slope is provided by ŝ = 0.01/ED01, where ED01 is the dose corresponding to an excess cancer risk of 1%. Low-dose estimates of cancer risk are then obtained as risk = ŝ × dose. The proposed procedure is similar to one that has been utilized in the past by the Center for Food Safety and Applied Nutrition, Food and Drug Administration. The upper confidence limit, s*, corresponding to this point estimate of low-dose slope is similar to the upper limit, q1*, obtained from the generalized multistage model. The advantage of the proposed procedure is that ŝ provides stable estimates of low-dose carcinogenic potency that are not unduly influenced by small perturbations of the tumor incidence rates, unlike the maximum likelihood estimate q̂1.
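The proposed point estimate is simple enough to show in a few lines. The ED01 value and exposure dose below are hypothetical, chosen only to make the arithmetic visible:

```python
# Stable low-dose slope as described above: s_hat = 0.01 / ED01,
# then linear extrapolation: risk = s_hat * dose.

ed01 = 2.5           # hypothetical dose giving 1% excess risk (mg/kg-day)
s_hat = 0.01 / ed01  # low-dose slope estimate (per mg/kg-day)

dose = 0.001         # hypothetical low environmental dose (mg/kg-day)
risk = s_hat * dose  # extrapolated excess cancer risk at that dose
```

Because ED01 sits in the observable range where competing dose-response models agree, s_hat inherits that stability, which is the point of the proposal.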

13.
Certification is an essential feature of organic farming, and it is based on inspections to verify compliance with European Council Regulation (EC) No 834/2007. A risk-based approach to noncompliance that alerts the control bodies to plan inspections would contribute to a more efficient and cost-effective certification system. An analysis of the factors that can affect the probability of noncompliance in organic farming has thus been developed. This article examines the application of zero-inflated count data models to farm-level panel data from inspection results and sanctions obtained from the Ethical and Environmental Certification Institute, one of the main control bodies in Italy. We tested many a priori hypotheses related to the risk of noncompliance. We find evidence of an important role for past noncompliant behavior in predicting future noncompliance, while farm size and the presence of livestock also increase the probability of noncompliance. We conclude by proposing that an efficient risk-based inspection system should be designed by weighting the known probability of occurrence of a given noncompliance according to the severity of its impact.

14.
The only logical way to avoid unnecessary future costs and improve quality is to analyze the past, providing input for the present. The informational management strategies are ready. There is continuous quality improvement, profiling with case-mix adjustment, and other techniques that will help us manage care and caring. But these strategies all rely on the customer being empowered to make truly informed decisions (the ethical principle of autonomy) and for us to advocate for patients (beneficence). Translating relevant data into information is the concentration of medical informatics. Virtually everyone in medical management can attest to the fact that, competitive forces notwithstanding, it is time for us to recognize not only that we are in the information business, but also that this information belongs to a larger community. Clearly, it is time to collaborate!

15.
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and has so far not been experimentally refuted, unlike the other two models. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.

16.
A retail customer's economic value is jointly determined by purchase frequency and average transaction value. This paper uses the NBD model to fit purchase counts and the gamma-gamma model to fit average transaction value. Based on Bayes' theorem, the expected future purchase count and the expected average transaction value can be computed conditional on past purchase behavior; a customer's expected future value is the product of these two expectations. We apply this stochastic model in an empirical analysis of loyalty-card data from a retail firm. The results show that the model not only fits the purchase-count and transaction-value data accurately but also yields accurate predictions of future customer value. The approach is of considerable value to retailers seeking to strengthen customer analytics and improve customer management.
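The two conditional expectations this abstract combines (NBD for purchase counts, gamma-gamma for average spend) can be sketched with standard conjugate-family results. All parameter values below are illustrative, not estimates from the paper's loyalty-card data:

```python
def expected_purchases(x, T, t, r, alpha):
    """NBD (gamma-Poisson): with rate lambda ~ Gamma(r, alpha) and x purchases
    observed over period T, the posterior mean rate is (r + x)/(alpha + T),
    so the expected purchase count over a future horizon t is:"""
    return t * (r + x) / (alpha + T)

def expected_spend(mbar, x, p, q, gamma):
    """Gamma-gamma: the posterior mean transaction value is a weighted average
    of the population mean gamma*p/(q-1) and the customer's observed mean mbar,
    with weight growing in the number of observed transactions x."""
    w = p * x / (p * x + q - 1.0)
    return w * mbar + (1.0 - w) * gamma * p / (q - 1.0)

# Hypothetical customer: 8 purchases in 52 weeks, average spend 45.
n_future = expected_purchases(x=8, T=52.0, t=52.0, r=1.2, alpha=8.0)
m_future = expected_spend(mbar=45.0, x=8, p=6.0, q=4.0, gamma=15.0)
clv = n_future * m_future  # expected future value = count x spend
```

As in the paper, the final value estimate is simply the product of the two posterior expectations, so each component can be fit and validated separately.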

17.
We compare the regulatory implications of applying the traditional (linearized) and exact two-stage dose-response models to animal carcinogenicity data. We analyze dose-response data from six studies, representing five different substances, and we determine the goodness of fit of each model as well as the 95% lower confidence limit of the dose corresponding to a target excess risk of 10^-5 (the target risk dose, TRD). For the two concave data sets, we find that the exact model gives a substantially better fit to the data than the traditional model, and that the exact model gives a TRD that is an order of magnitude lower than that given by the traditional model. In the other cases, the exact model gives a fit equivalent to or better than the traditional model. We also show that although the exact two-stage model may exhibit dose-response concavity at moderate dose levels, it is always linear or sublinear, and never supralinear, in the low-dose limit. Because regulatory concern is almost always confined to extrapolation in the low-dose region, supralinear behavior seems not to be of regulatory concern in the exact two-stage model. Finally, we find that when performing this low-dose extrapolation in cases of dose-response concavity, extrapolating the model fit leads to a more conservative TRD than taking a linear extrapolation from 10% excess risk. We conclude with a set of recommendations.

18.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit-theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta-Poisson model, are used for extrapolation of experimental dose-response data to the low doses often present in drinking water or food products. Unfortunately, the Beta-Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta-Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses: the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is because the risk cannot exceed the probability of exposure, a property that is not retained in the Beta-Poisson approximation. This maximum possible response curve is important for uncertainty analysis and for risk assessment of pathogens with unknown properties.
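The discrepancy described above is easy to reproduce numerically. The sketch below compares the familiar Beta-Poisson approximation with the exact single-hit form, which can be written via the confluent hypergeometric function 1F1; the parameter values are illustrative, not fitted to any particular pathogen:

```python
import numpy as np
from scipy.special import hyp1f1

alpha, beta = 0.2, 1.0  # illustrative dose-response parameters

def p_approx(dose):
    """Widely used Beta-Poisson approximation: 1 - (1 + d/beta)^(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def p_exact(dose):
    """Exact single-hit Beta-Poisson: 1 - 1F1(alpha, alpha + beta, -d)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -dose)

doses = np.array([0.01, 0.1, 1.0, 10.0])
rel_err = (p_approx(doses) - p_exact(doses)) / p_exact(doses)
# The relative discrepancy is largest at the low-dose end of this range,
# which is exactly the region used for risk extrapolation.
```

For beta much larger than alpha the two curves nearly coincide, which is why the approximation often looks adequate; the parameters above are deliberately chosen so the low-dose gap is visible.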

19.
Bus scheduling is essential to a carrier's profitability, its level of service, and its competitiveness in the market. In past research, most inter-city bus scheduling models have used only the projected (or average) market share and market demand, meaning that the variations in daily passenger demand that occur in actual operations are neglected. In this research, however, we do not use a fixed market share and market demand. Instead, passenger choice behaviors and uncertain market demands are considered. Stochastic and robust optimization and a passenger choice model are used to develop the models. These models are formulated as a nonlinear integer program that is characterized as NP-hard. We also develop a solution algorithm to solve the models efficiently. They are tested using data from a major Taiwan inter-city bus operation. The results show the good performance of the models and the solution algorithm.

20.
This paper applies real-options methods to the two core questions of delegation decisions: when to delegate and to whom. We first analyze the option characteristics of delegation decisions and summarize the real options embedded in them. We then construct a real-options model of the delegation decision, computing both the option value of delegating and the employee's human-capital value to the firm at the time of delegation. This yields a comparable decision criterion, a critical performance level, which simplifies the decision metric, improves the use of available information, and allows the firm to compare candidates with differing characteristics accurately. Decision schemes are given for three delegation scenarios, and a numerical example provides further analysis.
