Similar Literature
20 similar documents found (search time: 0 ms)
1.
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting by P(d) the probability of infection at a given mean dose d, the widely used dose–response model P(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and constraint conditions on (α̂, β̂) as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the 85 models examined, 68 were identified as valid approximate model applications, all of which had a near-perfect match to the corresponding exact beta-Poisson dose–response curve.
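A minimal sketch of the approximate formula and the proposed validity measure, assuming (as the derivation of the approximation suggests) that r follows a gamma distribution with shape α̂ and rate β̂. This is an illustrative reconstruction, not the authors' code; the incomplete-gamma series is a standard textbook device used here to avoid external dependencies.

```python
import math

def reg_lower_gamma(a, x, tol=1e-14, max_iter=1000):
    """Regularized lower incomplete gamma P(a, x), computed by its power series."""
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > tol * abs(total) and n < max_iter:
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def beta_poisson_approx(d, alpha, beta):
    """Approximate beta-Poisson model: P(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1) for r ~ Gamma(shape=alpha_hat, rate=beta_hat),
    i.e., the gamma CDF evaluated at r = 1."""
    return reg_lower_gamma(alpha_hat, beta_hat)
```

When most of the gamma mass lies in (0, 1) the validity measure is near 1 and the approximation tracks the exact model; when the mass spills past 1, the measure drops and the approximation is suspect.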

2.
Dose–response modeling of biological agents has traditionally focused on describing laboratory‐derived experimental data. Limited consideration has been given to understanding those factors that are controlled in a laboratory, but are likely to occur in real‐world scenarios. In this study, a probabilistic framework is developed that extends Brookmeyer's competing‐risks dose–response model to allow for variation in factors such as dose‐dispersion, dose‐deposition, and other within‐host parameters. With data sets drawn from dose–response experiments of inhalational anthrax, plague, and tularemia, we illustrate how for certain cases, there is the potential for overestimation of infection numbers arising from models that consider only the experimental data in isolation.

3.
Dose‐response analysis of binary developmental data (e.g., implant loss, fetal abnormalities) is best done using individual fetus data (identified to litter) or litter‐specific statistics such as number of offspring per litter and proportion abnormal. However, such data are not often available to risk assessors. Scientific articles usually present only dose‐group summaries for the number or average proportion abnormal and the total number of fetuses. Without litter‐specific data, it is not possible to estimate variances correctly (often characterized as a problem of overdispersion, intralitter correlation, or “litter effect”). However, it is possible to use group summary data when the design effect has been estimated for each dose group. Previous studies have demonstrated useful dose‐response and trend test analyses based on design effect estimates using litter‐specific data from the same study. This simplifies the analysis but does not help when litter‐specific data are unavailable. In the present study, we show that summary data on fetal malformations can be adjusted satisfactorily using estimates of the design effect based on historical data. When adjusted data are then analyzed with models designed for binomial responses, the resulting benchmark doses are similar to those obtained from analyzing litter‐level data with nested dichotomous models.
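The adjustment described above can be sketched in a few lines, assuming the usual definition of the design effect (DE) as the factor by which intralitter correlation inflates the binomial variance; the function names are illustrative, not from the paper.

```python
def adjust_for_design_effect(affected, total, design_effect):
    """Deflate dose-group summary counts by the design effect so a binomial
    model yields approximately correct variances.
    Effective counts: x_eff = x / DE, n_eff = n / DE (proportion unchanged)."""
    return affected / design_effect, total / design_effect

def proportion_variance(affected, total, design_effect=1.0):
    """Variance of the observed proportion, inflated by the design effect:
    Var(p_hat) = DE * p * (1 - p) / n."""
    p = affected / total
    return design_effect * p * (1.0 - p) / total
```

Note the adjustment leaves the observed proportion untouched; it only shrinks the effective sample size, which is what restores correct confidence intervals and benchmark-dose profiles.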

4.
The dose‐response analyses of cancer and noncancer health effects of aldrin and dieldrin were evaluated using current methodology, including benchmark dose analysis and the current U.S. Environmental Protection Agency (U.S. EPA) guidance on body weight scaling and uncertainty factors. A literature review was performed to determine the most appropriate adverse effect endpoints. Using current methodology and information, the estimated reference dose values were 0.0001 and 0.00008 mg/kg‐day for aldrin and dieldrin, respectively. The estimated cancer slope factors for aldrin and dieldrin were 3.4 and 7.0 (mg/kg‐day)⁻¹, respectively (i.e., about 5‐ and 2.3‐fold lower risk than the 1987 U.S. EPA assessments). Because aldrin and dieldrin are no longer used as pesticides in the United States, they are presumed to be a low priority for additional review by the U.S. EPA. However, because they are persistent and still detected in environmental samples, quantitative risk assessments based on the best available methods are required. Recent epidemiologic studies do not demonstrate a causal association between aldrin and dieldrin and human cancer risk. The proposed reevaluations suggest that these two compounds pose a lower human health risk than currently reported by the U.S. EPA.

5.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment is facilitated by the availability of a set of experimental studies that span a range of dose‐response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose‐response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose‐response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose‐response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.

6.
This paper proposes a new nested algorithm (NPL) for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K–stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz–Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit when K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade–off between finite sample precision and computational cost in the sequence of policy iteration estimators.

7.
Invasive aspergillosis (IA) is a major cause of mortality in immunocompromised hosts, most often following the inhalation of spores of Aspergillus. However, the relationship between Aspergillus concentration in the air and probability of IA is not quantitatively known. In this study, this relationship was examined in a murine model of IA. Immunosuppressed Balb/c mice were exposed for 60 minutes at day 0 to an aerosol of A. fumigatus spores (Af293 strain). At day 10, IA was assessed in mice by quantitative culture of the lungs and galactomannan assay. Fifteen separate nebulizations with varying spore concentrations were performed. Rates of IA ranged from 0% to 100% according to spore concentrations. The dose‐response relationship between probability of infection and spore exposure was approximated using the exponential model and the more flexible beta‐Poisson model. Prior distributions of the parameters of the models were proposed then updated with data in a Bayesian framework. Both models yielded close median dose‐responses of the posterior distributions for the main parameter of the model, but with different dispersions, either when the exposure dose was the concentration in the nebulized suspension or was the estimated quantity of spores inhaled by a mouse during the experiment. The median quantity of inhaled spores that infected 50% of mice was estimated at 1.8 × 10⁴ and 3.2 × 10⁴ viable spores in the exponential and beta‐Poisson models, respectively. This study provides dose‐response parameters for quantitative assessment of the relationship between airborne exposure to the reference A. fumigatus strain and probability of IA in immunocompromised hosts.
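Both fitted models have simple closed forms. A sketch using the ID50 values quoted above; the beta-Poisson is written in its common N50 parameterization, which is an assumption on our part rather than the authors' exact parameterization.

```python
import math

def p_exponential(d, k):
    """Exponential dose-response: p(d) = 1 - exp(-k * d)."""
    return 1.0 - math.exp(-k * d)

def p_beta_poisson(d, alpha, n50):
    """Approximate beta-Poisson in N50 form:
    p(d) = 1 - (1 + (d / N50) * (2**(1/alpha) - 1))**(-alpha)."""
    return 1.0 - (1.0 + (d / n50) * (2.0 ** (1.0 / alpha) - 1.0)) ** (-alpha)

def id50_exponential(k):
    """Dose at which the exponential model predicts 50% infection."""
    return math.log(2.0) / k
```

Setting k = ln(2) / 1.8e4 reproduces the exponential model's reported median infectious dose, and by construction p_beta_poisson(N50, alpha, N50) = 0.5 for any alpha.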

8.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional “single‐hit” dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
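The probability-generating-function formulation gives the single-hit risk as 1 − G(1 − r), where G is the PGF of the dose count and r the per-organism infection probability. A sketch comparing the Poisson case with the negative binomial (a clustered special case of the family discussed above); the parameterization by mean and dispersion k is a common convention we assume here.

```python
import math

def single_hit_risk_poisson(mean_dose, r):
    """1 - G(1 - r) with Poisson PGF G(s) = exp(mu * (s - 1))."""
    return 1.0 - math.exp(-mean_dose * r)

def single_hit_risk_negbin(mean_dose, r, k):
    """1 - G(1 - r) with negative binomial PGF
    G(s) = (1 + mu * (1 - s) / k)**(-k); recovers Poisson as k -> infinity."""
    return 1.0 - (1.0 + mean_dose * r / k) ** (-k)
```

At equal mean dose, the clustered (negative binomial) risk is never above the Poisson risk, consistent with the formal result stated in the abstract, and the gap widens as the dispersion parameter k shrinks.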

9.
The application of quantitative microbial risk assessments (QMRAs) to understand and mitigate risks associated with norovirus is increasingly common, as there is a high frequency of outbreaks worldwide. A key component of QMRA is the dose–response analysis, which is the mathematical characterization of the association between dose and outcome. For norovirus, multiple dose–response models are available that assume either a disaggregated or an aggregated intake dose. This work reviewed the dose–response models currently used in QMRA and compared predicted risks from waterborne exposures (recreational and drinking) using all available dose–response models. The results found that the majority of published QMRAs of norovirus use the ₁F₁ hypergeometric dose–response model with α = 0.04, β = 0.055. This dose–response model predicted relatively high risk estimates compared to other dose–response models for doses in the range of 1–1,000 genomic equivalent copies. The difference in predicted risk among dose–response models was largest for small doses, which has implications for drinking water QMRAs where the concentration of norovirus is low. Based on the review, a set of best practices was proposed to encourage the careful consideration and reporting of important assumptions in the selection and use of dose–response models in QMRA of norovirus. Finally, in the absence of one best norovirus dose–response model, multiple models should be used to provide a range of predicted outcomes for probability of infection.

10.
Risk Analysis, 2018, 38(8): 1685–1700
Military health risk assessors, medical planners, operational planners, and defense system developers require knowledge of human responses to doses of biothreat agents to support force health protection and chemical, biological, radiological, nuclear (CBRN) defense missions. This article reviews extensive data from 118 human volunteers administered aerosols of the bacterial agent Francisella tularensis, strain Schu S4, which causes tularemia. The data set includes incidence of early‐phase febrile illness following administration of well‐characterized inhaled doses of F. tularensis. Supplemental data on human body temperature profiles over time available from de‐identified case reports is also presented. A unified, logically consistent model of early‐phase febrile illness is described as a lognormal dose–response function for febrile illness linked with a stochastic time profile of fever. Three parameters are estimated from the human data to describe the time profile: incubation period or onset time for fever; rise time of fever; and near‐maximum body temperature. Inhaled dose‐dependence and variability are characterized for each of the three parameters. These parameters enable a stochastic model for the response of an exposed population through incorporation of individual‐by‐individual variability by drawing random samples from the statistical distributions of these three parameters for each individual. This model provides risk assessors and medical decision‐makers reliable representations of the predicted health impacts of early‐phase febrile illness for as long as one week after aerosol exposures of human populations to F. tularensis.
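A lognormal dose–response function of the kind described above has a simple log-probit form. A minimal sketch, assuming the model is parameterized by its median effective dose ED50 and a log-scale slope σ (a common convention; the paper's exact parameterization may differ).

```python
import math

def lognormal_dose_response(dose, ed50, sigma):
    """Lognormal (log-probit) dose-response:
    p(d) = Phi((ln d - ln ED50) / sigma), with Phi the standard normal CDF."""
    if dose <= 0:
        return 0.0
    z = (math.log(dose) - math.log(ed50)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

By construction the function returns 0.5 at the ED50 and rises monotonically with dose, which is what makes it convenient to couple with the stochastic fever-time-profile parameters sampled per individual.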

11.
Listeria monocytogenes is a leading cause of hospitalization, fetal loss, and death due to foodborne illnesses in the United States. A quantitative assessment of the relative risk of listeriosis associated with the consumption of 23 selected categories of ready‐to‐eat foods, published by the U.S. Department of Health and Human Services and the U.S. Department of Agriculture in 2003, has been instrumental in identifying the food products and practices that pose the greatest listeriosis risk and has guided the evaluation of potential intervention strategies. Dose‐response models, which quantify the relationship between an exposure dose and the probability of adverse health outcomes, were essential components of the risk assessment. However, because of data gaps and limitations in the available data and modeling approaches, considerable uncertainty existed. Since publication of the risk assessment, new data have become available for modeling L. monocytogenes dose‐response. At the same time, recent advances in the understanding of L. monocytogenes pathophysiology and strain diversity have warranted a critical reevaluation of the published dose‐response models. To discuss strategies for modeling L. monocytogenes dose‐response, the Interagency Risk Assessment Consortium (IRAC) and the Joint Institute for Food Safety and Applied Nutrition (JIFSAN) held a scientific workshop in 2011 (details available at http://foodrisk.org/irac/events/). The main findings of the workshop and the most current and relevant data identified during the workshop are summarized and presented in the context of L. monocytogenes dose‐response. This article also discusses new insights on dose‐response modeling for L. monocytogenes and research opportunities to meet future needs.

12.
Historically, U.S. regulators have derived cancer slope factors by using applied dose and tumor response data from a single key bioassay or by averaging the cancer slope factors of several key bioassays. Recent changes in U.S. Environmental Protection Agency (EPA) guidelines for cancer risk assessment have acknowledged the value of better use of mechanistic data and better dose–response characterization. However, agency guidelines may benefit from additional considerations presented in this paper. An exploratory study was conducted by using rat brain tumor data for acrylonitrile (AN) to investigate the use of physiologically based pharmacokinetic (PBPK) modeling along with pooling of dose–response data across routes of exposure as a means for improving carcinogen risk assessment methods. In this study, two contrasting assessments were conducted for AN-induced brain tumors in the rat on the basis of (1) the EPA's approach, the dose–response relationship was characterized by using administered dose/concentration for each of the key studies assessed individually; and (2) an analysis of the pooled data, the dose–response relationship was characterized by using PBPK-derived internal dose measures for a combined database of ten bioassays. The cancer potencies predicted for AN by the contrasting assessments are remarkably different (i.e., risk-specific doses differ by as much as two to four orders of magnitude), with the pooled data assessments yielding lower values. This result suggests that current carcinogen risk assessment practices overestimate AN cancer potency. This methodology should be equally applicable to other data-rich chemicals in identifying (1) a useful dose measure, (2) an appropriate dose–response model, (3) an acceptable point of departure, and (4) an appropriate method of extrapolation from the range of observation to the range of prediction when a chemical's mode of action remains uncertain.

13.
The application of the exponential model is extended by the inclusion of new nonhuman primate (NHP), rabbit, and guinea pig dose‐lethality data for inhalation anthrax. Because deposition is a critical step in the initiation of inhalation anthrax, inhaled doses may not provide the most accurate cross‐species comparison. For this reason, species‐specific deposition factors were derived to translate inhaled dose to deposited dose. Four NHP, three rabbit, and two guinea pig data sets were utilized. Results from species‐specific pooling analysis suggested all four NHP data sets could be pooled into a single NHP data set, which was also true for the rabbit and guinea pig data sets. The three species‐specific pooled data sets could not be combined into a single generic mammalian data set. For inhaled dose, NHPs were the most sensitive (relative lowest LD50) species and rabbits the least. Improved inhaled LD50s proposed for use in risk assessment are 50,600, 102,600, and 70,800 inhaled spores for NHP, rabbit, and guinea pig, respectively. Lung deposition factors were estimated for each species using published deposition data from Bacillus spore exposures, particle deposition studies, and computer modeling. Deposition was estimated at 22%, 9%, and 30% of the inhaled dose for NHP, rabbit, and guinea pig, respectively. When the inhaled dose was adjusted to reflect deposited dose, the rabbit animal model appears the most sensitive with the guinea pig the least sensitive species.
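The inhaled-to-deposited translation is a single multiplication, and the figures quoted above are enough to reproduce the species reordering. A minimal sketch using only the values given in the abstract:

```python
def deposited_dose(inhaled_dose, deposition_fraction):
    """Translate an inhaled spore dose into a deposited dose."""
    return inhaled_dose * deposition_fraction

# Inhaled LD50s (spores) and deposition fractions quoted in the abstract.
LD50_INHALED = {"NHP": 50600, "rabbit": 102600, "guinea_pig": 70800}
DEPOSITION = {"NHP": 0.22, "rabbit": 0.09, "guinea_pig": 0.30}

def ld50_deposited(species):
    """LD50 expressed on a deposited-dose basis."""
    return deposited_dose(LD50_INHALED[species], DEPOSITION[species])
```

On an inhaled basis the NHP has the lowest LD50 (most sensitive), but on a deposited basis the rabbit becomes the most sensitive and the guinea pig the least, exactly the reversal the abstract reports.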

14.
ARCH and GARCH models directly address the dependency of conditional second moments, and have proved particularly valuable in modelling processes where a relatively large degree of fluctuation is present. These include financial time series, which can be particularly heavy tailed. However, little is known about properties of ARCH or GARCH models in the heavy-tailed setting, and no methods are available for approximating the distributions of parameter estimators there. In this paper we show that, for heavy-tailed errors, the asymptotic distributions of quasi-maximum likelihood parameter estimators in ARCH and GARCH models are nonnormal, and are particularly difficult to estimate directly using standard parametric methods. Standard bootstrap methods also fail to produce consistent estimators. To overcome these problems we develop percentile-t, subsample bootstrap approximations to estimator distributions. Studentizing is employed to approximate scale, and the subsample bootstrap is used to estimate shape. The good performance of this approach is demonstrated both theoretically and numerically.

15.
This article describes several approaches for estimating the benchmark dose (BMD) in a risk assessment study with quantal dose‐response data and when there are competing model classes for the dose‐response function. Strategies involving a two‐step approach, a model‐averaging approach, a focused‐inference approach, and a nonparametric approach based on a PAVA‐based estimator of the dose‐response function are described and compared. Attention is raised to the perils involved in data “double‐dipping” and the need to adjust for the model‐selection stage in the estimation procedure. Simulation results are presented comparing the performance of five model selectors and eight BMD estimators. An illustration using a real quantal‐response data set from a carcinogenicity study is provided.
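One common form of the model-averaging strategy mentioned above weights per-model BMD estimates by Akaike weights. This is a generic sketch of that scheme, not the specific estimators compared in the article, and averaging the BMDs themselves (rather than the dose-response curves) is just one of several conventions.

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC values into model-averaging weights."""
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [w / total for w in raw]

def model_averaged_bmd(bmds, aics):
    """Akaike-weighted average of per-model BMD estimates."""
    weights = akaike_weights(aics)
    return sum(w * b for w, b in zip(weights, bmds))
```

Models with nearly equal AICs share the weight almost evenly, while a model that fits much worse contributes essentially nothing to the averaged BMD.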

16.
Benefit–cost analysis is widely used to evaluate alternative courses of action that are designed to achieve policy objectives. Although many analyses take uncertainty into account, they typically only consider uncertainty about cost estimates and physical states of the world, whereas uncertainty about individual preferences, thus the benefit of policy intervention, is ignored. Here, we propose a strategy to integrate individual uncertainty about preferences into benefit–cost analysis using societal preference intervals, which are ranges of values over which it is unclear whether society as a whole should accept or reject an option. To illustrate the method, we use preferences for implementing a smart grid technology to sustain critical electricity demand during a 24‐hour regional power blackout on a hot summer weekend. Preferences were elicited from a convenience sample of residents in Allegheny County, Pennsylvania. This illustrative example shows that uncertainty in individual preferences, when aggregated to form societal preference intervals, can substantially change society's decision. We conclude with a discussion of where preference uncertainty comes from, how it might be reduced, and why incorporating unresolved preference uncertainty into benefit–cost analyses can be important.

17.
In this paper we derive the asymptotic properties of within-groups (WG), GMM, and LIML estimators for an autoregressive model with random effects when both T and N tend to infinity. GMM and LIML are consistent and asymptotically equivalent to the WG estimator. When T/N → 0 the fixed-T results for GMM and LIML remain valid, but WG, although consistent, has an asymptotic bias in its asymptotic distribution. When T/N tends to a positive constant, the WG, GMM, and LIML estimators exhibit negative asymptotic biases of order 1/T, 1/N, and 1/(2N − T), respectively. In addition, the crude GMM estimator that neglects the autocorrelation in first-differenced errors is inconsistent as T/N → c > 0, despite being consistent for fixed T. Finally, we discuss the properties of a random-effects pseudo-MLE with unrestricted initial conditions when both T and N tend to infinity.

18.
Various methods for risk characterization have been developed using probabilistic approaches. Data on Vietnamese farmers are available for the comparison of outcomes for risk characterization using different probabilistic methods. This article addresses the health risk characterization of chlorpyrifos using epidemiological dose‐response data and probabilistic techniques obtained from a case study with rice farmers in Vietnam. Urine samples were collected from farmers and analyzed for trichloropyridinol (TCP), which was converted into absorbed daily dose of chlorpyrifos. Adverse health response doses due to chlorpyrifos exposure were collected from epidemiological studies to develop dose‐adverse health response relationships. The health risk of chlorpyrifos was quantified using hazard quotient (HQ), Monte Carlo simulation (MCS), and overall risk probability (ORP) methods. With baseline (prior to pesticide spraying) and lifetime exposure levels (over a lifetime of pesticide spraying events), the HQ ranged from 0.06 to 7.1. The MCS method indicated less than 0.05% of the population would be affected while the ORP method indicated that less than 1.5% of the population would be adversely affected. With postapplication exposure levels, the HQ ranged from 1 to 32.5. The risk calculated by the MCS method was that 29% of the population would be affected, and the risk calculated by ORP method was 33%. The MCS and ORP methods have advantages in risk characterization due to use of the full distribution of data exposure as well as dose response, whereas HQ methods only used the exposure data distribution. These evaluations indicated that single‐event spraying is likely to have adverse effects on Vietnamese rice farmers.
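The HQ and MCS calculations above can be sketched generically. The distributions in the usage below are purely illustrative placeholders, not the study's fitted exposure or dose-response distributions.

```python
import random

def hazard_quotient(dose, rfd):
    """HQ = absorbed daily dose / reference dose; HQ > 1 flags potential concern."""
    return dose / rfd

def fraction_at_risk_mcs(dose_sampler, threshold_sampler, n=100000, seed=1):
    """Monte Carlo estimate of the fraction of the population whose sampled
    exposure dose exceeds its sampled adverse-effect threshold.
    Both samplers take a random.Random instance and return one draw."""
    rng = random.Random(seed)
    hits = sum(dose_sampler(rng) > threshold_sampler(rng) for _ in range(n))
    return hits / n
```

Unlike the scalar HQ, the Monte Carlo estimate uses the full exposure distribution (and, if supplied, a dose-response threshold distribution), which is the advantage the abstract attributes to the MCS and ORP methods.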

19.
Manufacturing firms are increasingly seeking cost and other competitive advantages by tightly coupling and managing their relationship with suppliers. Among other mechanisms, interorganizational systems (IOS) that facilitate boundary‐spanning activities of a firm enable them to effectively manage different types of buyer–supplier relationships. This study integrates literature from the operations and information systems fields to create a joint perspective in understanding the linkages between the nature of the IOS, buyer–supplier relationships, and manufacturing performance at the dyadic level. External integration, breadth, and initiation are used to capture IOS functionality, and their effect on process efficiency and sourcing leverage is examined. The study also explores the differences in how manufacturing firms use IOS when operating under varying levels of competitive intensity and product standardization. In order to test the research models and related hypothesis, empirical data on buyer–supplier dyads is collected from manufacturing firms. The results show that only higher levels of external integration that go beyond simple procurement systems, as well as who initiates the IOS, allow manufacturing firms to enhance process efficiency. In contrast, IOS breadth and IOS initiation enable manufacturing firms to enhance sourcing leverage over their suppliers. In addition, firms making standardized products in highly competitive environments tend to achieve higher process efficiencies and have higher levels of external integration. The study shows how specific IOS decisions allow manufacturing firms to better manage their dependence on the supplier for resources and thereby select system functionalities that are consistent with their own operating environments and the desired supply chain design.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号