Similar Literature
20 similar records found
1.
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta‐Poisson model is a two‐parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P(d) as the probability of infection at a given mean dose d, the widely used dose–response model P(d) = 1 − (1 + d/β)^−α is an approximate formula for the exact beta‐Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and constraint conditions for α̂ and β̂ as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta‐Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, all of which gave a near‐perfect match to the corresponding exact beta‐Poisson model dose–response curve.
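A minimal sketch (not the study's code) of both quantities: the approximate formula arises exactly when r is gamma-distributed with shape α̂ and rate β̂, so the validity measure reduces to the gamma CDF evaluated at 1; the parameter values below are illustrative.

```python
from scipy.stats import gamma

def p_infection_approx(d, alpha, beta):
    """Approximate beta-Poisson probability of infection at mean dose d."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1 | alpha_hat, beta_hat), with r ~ Gamma(shape=alpha_hat, rate=beta_hat).

    r is a single-hit probability, so gamma mass above 1 signals an invalid
    approximation (assumed reading of the abstract's gamma distribution).
    """
    return gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)

# Illustrative MLEs (not from the study): a large beta_hat keeps r below 1.
alpha_hat, beta_hat = 0.4, 100.0
print(p_infection_approx(10.0, alpha_hat, beta_hat))  # risk at mean dose d = 10
print(validity_measure(alpha_hat, beta_hat))          # near 1 => approximation valid
```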

2.
In determining their operations strategy, a firm chooses whether to be responsive or efficient. For firms competing in a market with uncertain demand and varying intensity of substitutability for the competitor's product, we characterize the responsive or efficient choice in equilibrium. To focus first on the competitive implications, we study a model where a firm can choose to be responsive at no additional fixed or marginal cost. We find that competing firms will choose the same configuration (responsive or efficient), and responsiveness tends to be favorable when demand uncertainty is high or when product competition is not too strong. Intense competition can drive firms to choose to be efficient rather than responsive even when there is no additional cost of being responsive. In such a case, both firms would be better off by choosing to be responsive but cannot credibly commit. We extend the basic model to study the impact of endogenized production timing, multiple production runs, and product holdback (or, equivalently, postponed production). For all these settings, we find structurally similar results: firms choose the same configuration, and the firms may miss Pareto‐improvements. Furthermore, through extensions to the basic model, we find that greater operational flexibility can make responsiveness less attractive in the presence of product competition. In contrast to our basic model and other extensions, we find it is possible for one firm to be responsive while the other is efficient when there is either a fixed cost or variable cost premium associated with responsive delivery.

3.
Dose–response modeling of biological agents has traditionally focused on describing laboratory‐derived experimental data. Limited consideration has been given to understanding those factors that are controlled in a laboratory but are likely to vary in real‐world scenarios. In this study, a probabilistic framework is developed that extends Brookmeyer's competing‐risks dose–response model to allow for variation in factors such as dose‐dispersion, dose‐deposition, and other within‐host parameters. With data sets drawn from dose–response experiments of inhalational anthrax, plague, and tularemia, we illustrate how, in certain cases, there is the potential for overestimation of infection numbers arising from models that consider only the experimental data in isolation.

4.
In this paper, we propose a simple bias-reduced log-periodogram regression estimator, d̂_r, of the long-memory parameter, d, that eliminates the first- and higher-order biases of the Geweke and Porter-Hudak (1983) (GPH) estimator. The bias-reduced estimator is the same as the GPH estimator except that one includes frequencies to the power 2k for k = 1, …, r, for some positive integer r, as additional regressors in the pseudo-regression model that yields the GPH estimator. The reduction in bias is obtained using assumptions on the spectrum only in a neighborhood of the zero frequency. Following the work of Robinson (1995b) and Hurvich, Deo, and Brodsky (1998), we establish the asymptotic bias, variance, and mean-squared error (MSE) of d̂_r, determine the asymptotic MSE-optimal choice of the number of frequencies, m, to include in the regression, and establish the asymptotic normality of d̂_r. These results show that the bias of d̂_r goes to zero at a faster rate than that of the GPH estimator when the normalized spectrum at zero is sufficiently smooth, but that its variance is increased only by a multiplicative constant. We show that the bias-reduced estimator d̂_r attains the optimal rate of convergence for a class of spectral densities that includes those that are smooth of order s ≥ 1 at zero when r ≥ (s − 2)/2 and m is chosen appropriately. For s > 2, the GPH estimator does not attain this rate. The proof uses results of Giraitis, Robinson, and Samarov (1997). We specify a data-dependent plug-in method for selecting the number of frequencies m to minimize asymptotic MSE for a given value of r. Some Monte Carlo simulation results for stationary Gaussian ARFIMA(1, d, 1) and (2, d, 0) models show that the bias-reduced estimators perform well relative to the standard log-periodogram regression estimator.
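A sketch of the estimator as described: the GPH log-periodogram regression augmented with λ_j^(2k), k = 1, …, r, as extra regressors, where setting r = 0 recovers plain GPH. The data, m, and r below are illustrative choices, not the paper's plug-in selections.

```python
import numpy as np

def bias_reduced_lp(x, m, r):
    """Bias-reduced log-periodogram estimate of the long-memory parameter d.

    Regress log I(lambda_j) on -2*log(lambda_j) plus lambda_j**(2k), k = 1..r,
    over the first m Fourier frequencies; r = 0 reproduces the GPH estimator.
    """
    n = len(x)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n           # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    log_periodogram = np.log(np.abs(dft) ** 2 / (2.0 * np.pi * n))
    X = np.column_stack([np.ones(m), -2.0 * np.log(lam)] +
                        [lam ** (2 * k) for k in range(1, r + 1)])
    coef, *_ = np.linalg.lstsq(X, log_periodogram, rcond=None)
    return coef[1]                                        # coefficient on -2*log(lambda)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # white noise, so the true d is 0
print(bias_reduced_lp(x, m=200, r=0))    # GPH
print(bias_reduced_lp(x, m=200, r=1))    # bias-reduced variant
```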

5.
The application of quantitative microbial risk assessments (QMRAs) to understand and mitigate risks associated with norovirus is increasingly common, as there is a high frequency of outbreaks worldwide. A key component of QMRA is the dose–response analysis, which is the mathematical characterization of the association between dose and outcome. For norovirus, multiple dose–response models are available that assume either a disaggregated or an aggregated intake dose. This work reviewed the dose–response models currently used in QMRA, and compared predicted risks from waterborne exposures (recreational and drinking) using all available dose–response models. The review found that the majority of published QMRAs of norovirus use the 1F1 hypergeometric dose–response model with α = 0.04, β = 0.055. This dose–response model predicted relatively high risk estimates compared to other dose–response models for doses in the range of 1–1,000 genomic equivalent copies. The difference in predicted risk among dose–response models was largest for small doses, which has implications for drinking water QMRAs, where the concentration of norovirus is low. Based on the review, a set of best practices was proposed to encourage the careful consideration and reporting of important assumptions in the selection and use of dose–response models in QMRA of norovirus. Finally, in the absence of one best norovirus dose–response model, multiple models should be used to provide a range of predicted outcomes for probability of infection.
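SciPy exposes the Kummer function as scipy.special.hyp1f1, so the models can be compared directly. The sketch below assumes the disaggregated exact beta-Poisson form P(d) = 1 − 1F1(α, α + β, −d) and uses the reported α = 0.04, β = 0.055; note that these values violate the β ≫ 1 condition discussed in item 1, which is why the approximate formula is shown only for contrast.

```python
from scipy.special import hyp1f1

alpha, beta = 0.04, 0.055   # values reported for the 1F1 norovirus model

def p_exact(d):
    """Exact beta-Poisson: P(d) = 1 - 1F1(alpha, alpha + beta, -d)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def p_approx(d):
    """Approximate beta-Poisson: P(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

for d in [1, 10, 100, 1000]:   # genomic equivalent copies
    print(d, p_exact(d), p_approx(d))
```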

6.
This paper studies the relation between discrete-time and continuous-time principal–agent models. We derive the continuous-time model as a limit of discrete-time models with ever shorter periods and show that optimal incentive schemes in the discrete-time models approximate the optimal incentive scheme in the continuous model, which is linear in accounts. Under the additional assumption that the principal observes only cumulative total profits at the end and the agent can destroy profits unnoticed, an incentive scheme that is linear in total profits is shown to be approximately optimal in the discrete-time model when the length of the period is small.

7.
Each agent in a finite set requests an integer quantity of an idiosyncratic good; the resulting total cost must be shared among the participating agents. The Aumann–Shapley prices are given by the Shapley value of the game where each unit of each good is regarded as a distinct player. The Aumann–Shapley cost‐sharing method charges to an agent the sum of the prices attached to the units she consumes. We show that this method is characterized by the two standard axioms of Additivity and Dummy, and the property of No Merging or Splitting: agents never find it profitable to split or to merge their consumptions. We offer a variant of this result using the No Reshuffling condition: the total cost share paid by a group of agents who consume perfectly substitutable goods depends only on their aggregate consumption. We extend this characterization to the case where agents are allowed to consume bundles of goods.
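A sketch of the method's definition on a toy discrete problem: every unit of every good is treated as a distinct Shapley player, and an agent pays the sum of her units' Shapley values. The demands and cost function are illustrative, and the exact enumeration is only feasible for a handful of units.

```python
import math
from itertools import permutations

import numpy as np

def aumann_shapley_charges(q, cost):
    """Charge each agent the sum of the Shapley values of her individual units,
    in the game whose players are the units themselves.

    q: list of integer demands per agent; cost: function of a per-agent count vector.
    Exact enumeration over permutations, so keep the total number of units tiny.
    """
    owners = [i for i, qi in enumerate(q) for _ in range(qi)]  # owner of each unit
    n = len(owners)
    charges = np.zeros(len(q))
    for order in permutations(range(n)):
        counts = [0] * len(q)
        for unit in order:                        # add units one at a time
            before = cost(counts)
            counts[owners[unit]] += 1
            charges[owners[unit]] += cost(counts) - before
    return charges / math.factorial(n)

# Toy example (illustrative): two agents, concave joint cost.
q = [2, 1]
cost = lambda c: math.sqrt(3.0 * c[0] + 5.0 * c[1])
print(aumann_shapley_charges(q, cost))        # per-agent charges
print(sum(aumann_shapley_charges(q, cost)))   # equals the total cost C(2, 1)
```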

8.
The study presents an integrated, rigorous statistical approach to defining the likelihood of a threshold and a point of departure (POD) based on dose–response data, using a nested family of bent‐hyperbola models. The family includes four models: the full bent‐hyperbola model, which allows for transition between two linear regimes with various levels of smoothness; a bent‐hyperbola model reduced to a spline model, where the transition is fixed at a knot; a bent‐hyperbola model with the negative‐asymptote slope restricted to zero, named hockey‐stick with arc (HS‐Arc); and the spline model reduced further to a hockey‐stick type model (HS), where the first linear segment has a slope of zero. A likelihood‐ratio test is used to discriminate between the models and determine whether the more flexible versions provide better, or significantly better, fit than a hockey‐stick type model. The full bent‐hyperbola model can accommodate both threshold and nonthreshold behavior, can take on concave‐up and concave‐down shapes with various levels of curvature, can approximate the biochemically relevant Michaelis–Menten model, and can even be reduced to a straight line. Therefore, with the use of this model, the presence or absence of a threshold may even become irrelevant, and the best fit of the full bent‐hyperbola model can be used to characterize the dose–response behavior and risk levels, with no need for mode‐of‐action (MOA) information. The point of departure (POD), characterized by the exposure level at which some predetermined response is reached, can be defined using the full model or one of the better‐fitting reduced models.
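A sketch of the nesting idea with the two simplest members of the family: the hockey-stick (HS) model, flat until a knot, nested inside the fixed-knot spline (two free slopes), compared via a likelihood-ratio test under an assumed Gaussian error model. The data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def hockey_stick(x, b0, knot, slope):
    """Flat at b0 until the knot, then linear: the reduced (HS) model."""
    return b0 + slope * np.maximum(0.0, x - knot)

def spline(x, b0, knot, slope1, slope2):
    """Two linear segments joined at a knot (spline reduction of the bent hyperbola)."""
    return b0 + slope1 * np.minimum(x, knot) + slope2 * np.maximum(0.0, x - knot)

def gauss_loglik(resid):
    """Gaussian log-likelihood at the MLE variance (mean squared residual)."""
    n = resid.size
    return -0.5 * n * (np.log(2.0 * np.pi * np.mean(resid ** 2)) + 1.0)

rng = np.random.default_rng(1)
dose = np.linspace(0.0, 10.0, 60)
resp = hockey_stick(dose, 1.0, 4.0, 0.8) + rng.normal(0.0, 0.2, dose.size)

p_hs, _ = curve_fit(hockey_stick, dose, resp, p0=[1.0, 5.0, 1.0])
p_sp, _ = curve_fit(spline, dose, resp, p0=[1.0, 5.0, 0.0, 1.0])
lrt = 2.0 * (gauss_loglik(resp - spline(dose, *p_sp)) -
             gauss_loglik(resp - hockey_stick(dose, *p_hs)))
print("LRT:", lrt, "p-value:", chi2.sf(lrt, df=1))  # df = parameter difference
```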

9.
Some viruses cause tumor regression and can be used to treat cancer patients; these viruses are called oncolytic viruses. To assess whether oncolytic viruses of animal origin excreted by patients pose a health risk for livestock, a quantitative risk assessment (QRA) was performed to estimate the risk for the Dutch pig industry after environmental release of Seneca Valley virus (SVV). The QRA assumed SVV excretion in stool by one cancer patient on day 1 in the Netherlands, discharge of SVV with treated wastewater into the river Meuse, downstream intake of river water for drinking water production, and consumption of this drinking water by pigs. Dose–response curves for SVV infection and clinical disease in pigs were constructed from experimental data. In the worst‐case scenario (four log10 virus reduction by drinking water treatment and a farm with 10,000 pigs), the infection risk is less than 1% with 95% certainty. The risk of clinical disease is almost seven orders of magnitude lower. Risks may increase proportionally with the number of treated patients and the days of virus excretion. These data indicate that application of wild‐type oncolytic animal viruses may infect susceptible livestock. A QRA regarding the use of oncolytic animal viruses is, therefore, highly recommended. For this, data on excretion by patients, and dose–response parameters for infection and clinical disease in livestock, should be studied.

10.
Due to the changing competitive landscape, organizations must increasingly focus on acquiring external knowledge to advance new technologies. This study examines the institutionalization of knowledge transfer activities between industrial firms and university research centers. Data were collected from 189 firms collaborating with 21 university research centers in the US. Results show that knowledge transfer activities are facilitated when industrial firms have more mechanistic structures, cultures that are more stable and direction-oriented, and when the firm is more trusting of its university research center partner. Implications for both industry and universities, including their effect on firm performance, are discussed.

11.
Seth D. Baum, Risk Analysis, 2019, 39(11): 2427–2442
To prevent catastrophic asteroid–Earth collisions, it has been proposed to use nuclear explosives to deflect away earthbound asteroids. However, this policy of nuclear deflection could inadvertently increase the risk of nuclear war and other violent conflict. This article conducts risk–risk tradeoff analysis to assess whether nuclear deflection results in a net increase or decrease in risk. Assuming nonnuclear deflection options are also used, nuclear deflection may only be needed for the largest and most imminent asteroid collisions. These are low‐frequency, high‐severity events. The effect of nuclear deflection on violent conflict risk is more ambiguous due to the complex and dynamic social factors at play. Indeed, it is not clear whether nuclear deflection would cause a net increase or decrease in violent conflict risk. Similarly, this article cannot reach a precise conclusion on the overall risk–risk tradeoff. The value of this article comes less from specific quantitative conclusions and more from providing an analytical framework and a better overall understanding of the policy decision. The article demonstrates the importance of integrated analysis of global risks and the policies to address them, as well as the challenge of quantitative evaluation of complex social processes such as violent conflict.

12.
Dose‐response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi‐mechanistic models known as single‐hit models, such as the exponential and the exact beta‐Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single‐hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so‐called single‐hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single‐hit models. Further analysis of the model framework is facilitated by formulating the single‐hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single‐hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model‐consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model‐consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model‐consistent expression for the mean per‐exposure dose that produces the correct total risk from repeated exposures is developed.
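A sketch of the generating-function formulation for one concrete case: a Poisson-distributed dose, whose PGF is G(s) = exp(d(s − 1)), so the single-hit risk at susceptibility r is 1 − G(1 − r) = 1 − e^(−rd). The Monte Carlo below checks conclusion (1), that variation in r lowers risk relative to a constant r with the same mean; the beta distribution is an illustrative choice.

```python
import numpy as np

def single_hit_risk_poisson(d, r):
    """Single-hit risk for a Poisson(d) dose: 1 - G(1 - r), with G(s) = exp(d*(s - 1))."""
    return 1.0 - np.exp(-r * d)

rng = np.random.default_rng(2)
d = 50.0                               # mean dose
a, b = 0.5, 4.5                        # Beta susceptibility, mean r = a/(a+b) = 0.1
r_draws = rng.beta(a, b, 100_000)

risk_variable = np.mean(single_hit_risk_poisson(d, r_draws))  # heterogeneous hosts
risk_constant = single_hit_risk_poisson(d, a / (a + b))       # constant r, same mean
print(risk_variable, "<", risk_constant)  # Jensen's inequality: variation lowers risk
```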

13.
Unlike the prediction of a frictionless open economy model, long‐term average savings and investment rates are highly correlated across countries—a puzzle first identified by Feldstein and Horioka (1980). We quantitatively investigate the impact of two types of financial frictions on this correlation. One is limited enforcement, where contracts are enforced by the threat of default penalties. The other is limited spanning, where the only asset available is noncontingent bonds. We find that the calibrated model with both frictions produces a savings–investment correlation and a volume of capital flows close to the data. To solve the puzzle, the limited enforcement friction needs low default penalties under which capital flows are much lower than those in the data, and the limited spanning friction needs to exogenously restrict capital flows to the observed level. When combined, the two frictions interact to endogenously restrict capital flows and thereby solve the Feldstein–Horioka puzzle.

14.
The global stock market turbulence since the 2008 financial crisis, sharp swings in oil prices, and heightened economic uncertainty make it important to study how risk is transmitted across markets. After reviewing the shortcomings of existing research and the available refinements, this paper follows the approach of Diebold and Yilmaz (2012) to explore volatility spillover effects among international crude oil prices, US economic uncertainty, and the Chinese stock market. Using monthly data from January 1986 to December 2016 on crude oil prices, a US economic-uncertainty index, and Chinese stock prices, we estimate static and dynamic volatility spillover indices and carry out nonlinearity tests. The empirical results show that international oil prices explain most of the volatility, and that the directional spillover indices are bidirectional and asymmetric. Over the full sample, the system's volatility arises mainly from shocks to the other variables, with spillovers from international oil prices accounting for a large share. The spillovers from international oil prices, US economic uncertainty, and Chinese stock prices to the other variables are all nonlinear: for the first two, positive shocks generate larger spillover effects and negative shocks smaller ones, whereas for the last, positive shocks generate smaller spillover effects and negative shocks larger ones.
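A rough sketch of the total spillover index in the spirit of Diebold and Yilmaz (2012). Note the caveat: statsmodels ships only the Cholesky-orthogonalized forecast-error variance decomposition, whereas Diebold and Yilmaz (2012) use a generalized FEVD, so this version is ordering-dependent and only approximates their index. The data and column names are placeholders, not the paper's series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def total_spillover_index(data, lags=2, horizon=10):
    """Total volatility spillover: the share of H-step forecast-error variance
    that comes from shocks to *other* variables, averaged across variables.
    """
    res = VAR(data).fit(lags)
    decomp = res.fevd(horizon).decomp       # shape: (n_vars, horizon, n_vars)
    theta = decomp[:, -1, :]                # variance shares at horizon H (rows sum to 1)
    n = theta.shape[0]
    cross = theta.sum() - np.trace(theta)   # variance explained by other variables
    return 100.0 * cross / n

# Placeholder monthly volatility series, 372 months (Jan 1986 - Dec 2016).
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.standard_normal((372, 3)),
                  columns=["oil", "us_uncertainty", "cn_stocks"])
print(total_spillover_index(df))
```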

15.
Microbial food safety risk assessment models can often be simplified by eliminating the need to integrate a complex dose‐response relationship across a distribution of exposure doses. This is possible if exposure pathways lead to pathogen doses at exposure that consistently have a small probability of causing illness. In this situation, the probability of illness follows an approximately linear function of dose. Consequently, the predicted probability of illness per serving across all exposures is linear with respect to the expected value of dose. The majority of dose‐response functions are approximately linear when the dose is low. Nevertheless, what constitutes "low" depends on the parameters of the dose‐response function for a particular pathogen. In this study, a method is proposed to determine an upper bound of the exposure distribution for which the use of a linear dose‐response function is acceptable. If this upper bound is substantially larger than the expected value of exposure doses, then a linear approximation for probability of illness is reasonable. If conditions are appropriate for using the linear dose‐response approximation, for example, the expected value of exposure doses is two to three log10 smaller than the upper bound of the linear portion of the dose‐response function, then predicting the risk‐reducing effectiveness of a proposed policy is trivial. Simple examples illustrate how this approximation can be used to inform policy decisions and improve an analyst's understanding of risk.
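A sketch of the proposed check, using the exponential dose-response model 1 − e^(−rd) as a stand-in (an assumption; the paper's argument applies to dose-response functions generally): find the largest dose at which the linear approximation r·d stays within a stated relative error, and compare that bound with the expected exposure dose. All parameter values are illustrative.

```python
import numpy as np

def linear_upper_bound(r, rel_err=0.05):
    """Largest dose at which the linear approximation r*d stays within rel_err
    of the exponential dose-response 1 - exp(-r*d)."""
    doses = np.logspace(-6, 3, 2000) / r            # scan doses relative to 1/r
    exact = 1.0 - np.exp(-r * doses)
    ok = np.abs(r * doses - exact) / exact <= rel_err
    return doses[ok].max()

r = 1e-3                    # illustrative single-hit probability per organism
bound = linear_upper_bound(r)
mean_dose = 0.05            # illustrative expected exposure dose
print(bound, np.log10(bound / mean_dose))  # >= 2-3 log10 => linear approximation OK
```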

16.
This paper presents simple new multisignal generalizations of the two classic methods used to justify the first‐order approach to moral hazard principal–agent problems, and compares these two approaches with each other. The paper first discusses limitations of previous generalizations. Then a state‐space formulation is used to obtain a new multisignal generalization of the Jewitt (1988) conditions. Next, using the Mirrlees formulation, new multisignal generalizations of the convexity of the distribution function condition (CDFC) approach of Rogerson (1985) and Sinclair‐Desgagné (1994) are obtained. Vector calculus methods are used to derive easy‐to‐check local conditions for our generalization of the CDFC. Finally, we argue that the Jewitt conditions may generalize more flexibly than the CDFC to the multisignal case. This is because, with many signals, the principal can become very well informed about the agent's action and, even in the one‐signal case, the CDFC must fail when the signal becomes very accurate.

17.
Benefit–cost analysis is widely used to evaluate alternative courses of action that are designed to achieve policy objectives. Although many analyses take uncertainty into account, they typically only consider uncertainty about cost estimates and physical states of the world, whereas uncertainty about individual preferences, and thus about the benefit of policy intervention, is ignored. Here, we propose a strategy to integrate individual uncertainty about preferences into benefit–cost analysis using societal preference intervals, which are ranges of values over which it is unclear whether society as a whole should accept or reject an option. To illustrate the method, we use preferences for implementing a smart grid technology to sustain critical electricity demand during a 24‐hour regional power blackout on a hot summer weekend. Preferences were elicited from a convenience sample of residents in Allegheny County, Pennsylvania. This illustrative example shows that uncertainty in individual preferences, when aggregated to form societal preference intervals, can substantially change society's decision. We conclude with a discussion of where preference uncertainty comes from, how it might be reduced, and why incorporating unresolved preference uncertainty into benefit–cost analyses can be important.

18.
Due to the concentration of assets in disaster‐prone zones, changes in the risk landscape, and changes in the intensity of natural events, property losses have increased considerably in recent decades. While measuring these stock damages is common practice in the literature, the assessment of economic ripple effects due to business interruption is still limited, and available estimates tend to vary significantly across models. This article focuses on the most popular single‐region input–output models for disaster impact evaluation. It starts with the traditional Leontief model and then compares its assumptions and results with more complex methodologies (rebalancing algorithms, the sequential interindustry model, the dynamic inoperability input–output model, and its inventory counterpart). While the estimated losses vary across models, all the figures are based on the same event, the 2007 Chehalis River flood that impacted three rural counties in Washington State. Given that the large majority of floods take place in rural areas, this article gives the practitioner a thorough review of how future events can be assessed and guidance on model selection.
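A sketch of the baseline Leontief calculation, x = (I − A)^(−1) f, propagating a business-interruption shock to final demand through a toy three-sector economy; all numbers are illustrative, not drawn from the Chehalis River study.

```python
import numpy as np

# Illustrative 3-sector technical-coefficients matrix A (a_ij = input i per unit output j).
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.05, 0.10],
              [0.05, 0.15, 0.10]])
leontief_inverse = np.linalg.inv(np.eye(3) - A)

f0 = np.array([100.0, 80.0, 60.0])   # pre-flood final demand
f1 = np.array([100.0, 60.0, 60.0])   # sector 2 demand interrupted by the flood

x0 = leontief_inverse @ f0           # total output required pre-event
x1 = leontief_inverse @ f1           # total output post-event
print("direct loss:", (f0 - f1).sum())
print("total output loss (direct + ripple):", (x0 - x1).sum())
```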

19.
A supply chain management (SCM) system comprises many subsystems, including forecasting, order management, supplier management, procurement, production planning and control, warehousing and distribution, and product development. Demand–supply mismatches (DSMs) could indicate that some or all of these subsystems are not working as expected, creating uncertainties about the overall capabilities and effectiveness of the SCM system, which can increase firm risk. This article documents the effect of DSMs on firm risk as measured by equity volatility. Our sample consists of three different types of DSMs announced by publicly traded firms: production disruptions, excess inventory, and product introduction delays. We find that all three types of DSMs result in equity volatility increases. Over a 2‐year period around the announcement date, we observe mean abnormal equity volatility increases of 5.62% for production disruptions, 11.19% for excess inventory, and 6.28% for product introduction delays. Volatility increases associated with excess inventory are significantly higher than the increases associated with production disruptions and product introduction delays. Across all three types of DSMs, volatility changes are positively correlated with changes in information asymmetry. The results provide some support for the hypothesis that volatility changes are also correlated with changes in financial and operating leverage.

20.
Research across a variety of risk domains finds that the risk perceptions of professionals and the public differ. Such risk perception gaps occur if professionals and the public understand individual risk factors differently or if they aggregate risk factors into overall risk differently. The nature of such divergences, whether based on objective inaccuracies or on differing perspectives, is important to understand. However, evidence of risk perception gaps typically pertains to general, overall risk levels; evidence of and details about mismatches between the specific level of risk faced by individuals and their perceptions of that risk is less available. We examine these issues with a paired data set of professional and resident assessments of parcel‐level wildfire risk for private property in a wildland–urban interface community located in western Colorado, United States. We find evidence of a gap between the parcel‐level risk assessments of a wildfire professional and numerous measures of residents' risk assessments. Overall risk ratings diverge for the majority of properties, as do judgments about many specific property attributes and about the relative contribution of these attributes to a property's overall level of risk. However, overall risk gaps are not well explained by many factors commonly found to relate to risk perceptions. Understanding the nature of these risk perception gaps can facilitate improved communication by wildfire professionals about how risks can be mitigated on private lands. These results also speak to the general nature of individual‐level risk perception.
