20 similar documents were retrieved.
1.
S. Hadi Khazraee, Antonio Jose Sáez‐Castillo, Srinivas Reddy Geedipally, Dominique Lord. Risk Analysis, 2015, 35(5): 919-930
The hyper‐Poisson distribution can handle both over‐ and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation‐specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper‐Poisson distribution in analyzing motor vehicle crash count data. The hyper‐Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway‐highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness‐of‐fit measures indicated that the hyper‐Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper‐Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway‐Maxwell‐Poisson model previously developed for the same data set. The advantages of the hyper‐Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper‐Poisson model can handle both over‐ and underdispersed crash data. Although not a major issue for the Conway‐Maxwell‐Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
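For readers who want to experiment with the distribution described above, the short Python sketch below evaluates one common parameterization of the hyper‐Poisson pmf, P(Y = y) = λ^y / ((γ)_y · 1F1(1; γ; λ)), where (γ)_y is a rising factorial and γ = 1 recovers the Poisson case. This is an illustrative sketch, not the authors' GLM code; the parameterization and the parameter values (lam, gam) are assumptions chosen only for demonstration.

```python
# Minimal sketch of a hyper-Poisson pmf (assumed parameterization, not the paper's code).
import numpy as np
from scipy.special import hyp1f1, poch

def hyper_poisson_pmf(y, lam, gam):
    """P(Y=y) = lam**y / ((gam)_y * 1F1(1; gam; lam)); gam = 1 recovers the Poisson."""
    norm = hyp1f1(1.0, gam, lam)              # normalizing constant
    return lam ** y / (poch(gam, y) * norm)   # poch = rising factorial (gam)_y

y = np.arange(0, 40)
for gam in (0.5, 1.0, 2.0):                   # vary the dispersion parameter
    p = hyper_poisson_pmf(y, lam=5.0, gam=gam)
    mean = np.sum(y * p)
    var = np.sum((y - mean) ** 2 * p)
    print(f"gamma={gam}: mean={mean:.2f}, var/mean={var / mean:.2f}")
```

Printing the variance-to-mean ratio for a few values of the dispersion parameter makes the over-/underdispersion behavior discussed in the abstract directly visible.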
2.
Royce A. Francis, Srinivas Reddy Geedipally, Seth D. Guikema, Soma Sekhar Dhavala, Dominique Lord, Sarah LaRocca. Risk Analysis, 2012, 32(1): 167-183
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway‐Maxwell Poisson (COM‐Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM‐Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM‐Poisson GLM, and (2) estimate the prediction accuracy of the COM‐Poisson GLM using simulated data sets. The results of the study indicate that the COM‐Poisson GLM is flexible enough to model under‐, equi‐, and overdispersed data sets with different sample mean values. The results also show that the COM‐Poisson GLM yields accurate parameter estimates. The COM‐Poisson GLM provides a promising and flexible approach for performing count data regression.
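As a companion to the abstract above, here is a minimal sketch of the COM‐Poisson pmf on which the GLM is built, P(Y = y) = λ^y / ((y!)^ν · Z(λ, ν)), where ν = 1 gives the Poisson, ν < 1 overdispersion, and ν > 1 underdispersion. The truncation length and parameter values are arbitrary illustrative choices; this is not the authors' MLE implementation.

```python
# Sketch of the Conway-Maxwell-Poisson pmf (standard parameterization, illustrative values).
import numpy as np
from scipy.special import gammaln

def com_poisson_logpmf(y, lam, nu, max_terms=500):
    """log P(Y=y) with P(Y=y) = lam**y / ((y!)**nu * Z), Z = sum_j lam**j / (j!)**nu."""
    j = np.arange(max_terms)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)   # log of each series term
    log_z = np.logaddexp.reduce(log_terms)               # log normalizing constant
    return y * np.log(lam) - nu * gammaln(y + 1) - log_z

y = np.arange(0, 60)
for nu in (0.5, 1.0, 2.0):            # nu < 1: overdispersed, nu > 1: underdispersed
    p = np.exp(com_poisson_logpmf(y, lam=4.0, nu=nu))
    m = np.sum(y * p)
    v = np.sum(y ** 2 * p) - m ** 2
    print(f"nu={nu}: mean={m:.2f}, var/mean={v / m:.2f}")
```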
3.
Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half‐lives in the human body and where chronic exposures to current levels in the environment raise concerns for human health effects. It is usually difficult and expensive to obtain large longitudinal data sets for human exposure studies. This article reports a new simulation method to generate longitudinal data with flexible numbers of subjects and days. Mixed models are used to describe the variance‐covariance structures of input longitudinal data. Based on estimated model parameters, simulation data are generated with similar statistical characteristics compared to the input data. Three criteria are used to determine similarity: the overall mean and standard deviation, the variance components percentages, and the average autocorrelation coefficients. Following the discussion of mixed models, a simulation procedure is presented and numerical results are shown through one human exposure study. Simulations of three sets of exposure data successfully meet the above criteria. In particular, simulations can always retain correct weights of inter‐ and intrasubject variances as in the input data. Autocorrelations are also well followed. Compared with other simulation algorithms, this new method stores more information about the input overall distribution so as to satisfy the above multiple criteria for statistical targets. In addition, it generates values from numerous data sources and simulates continuous observed variables better than current data methods. This new method also provides flexible options in both modeling and simulation procedures according to various user requirements.
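The following is a stripped-down Python illustration of the general idea described above: simulate repeated measurements with a chosen split between inter- and intrasubject variance and a target lag-1 autocorrelation. The random-effects structure, the additive (log-scale) formulation, and all parameter values are assumptions made for this sketch; it does not reproduce the authors' mixed-model procedure.

```python
# Illustrative simulation of longitudinal data: subject random effect + AR(1) errors.
import numpy as np

rng = np.random.default_rng(1)

def simulate_longitudinal(n_subjects, n_days, mu, sd_between, sd_within, rho):
    """Return an (n_subjects, n_days) array of simulated (log-scale) exposures."""
    subject_effect = rng.normal(0.0, sd_between, size=(n_subjects, 1))
    eps = np.empty((n_subjects, n_days))
    eps[:, 0] = rng.normal(0.0, sd_within, n_subjects)
    innov_sd = sd_within * np.sqrt(1.0 - rho ** 2)       # keeps marginal SD = sd_within
    for t in range(1, n_days):
        eps[:, t] = rho * eps[:, t - 1] + rng.normal(0.0, innov_sd, n_subjects)
    return mu + subject_effect + eps

x = simulate_longitudinal(n_subjects=50, n_days=7, mu=1.0,
                          sd_between=0.6, sd_within=0.4, rho=0.3)
print("overall mean/SD:", x.mean().round(2), x.std().round(2))
between = x.mean(axis=1).var()        # crude check of the between/within split
within = x.var(axis=1).mean()
print("approx. between-subject variance share:", round(between / (between + within), 2))
```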
4.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single‐hit" dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
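A small numerical sketch of the abstract's central comparison can be written directly from the probability-generating-function form of the single-hit risk, 1 − G_N(1 − r). Below, a negative binomial dose distribution stands in for a clustered (overdispersed) dose; it is a special case rather than the full stuttering Poisson family analyzed in the paper, and the values of r, k, and the mean doses are illustrative.

```python
# Single-hit risk via the pgf of the dose distribution: 1 - G_N(1 - r).
import numpy as np

def risk_poisson(mu, r):
    """Poisson dose with mean mu: G(s) = exp(mu*(s-1)), evaluated at s = 1 - r."""
    return 1.0 - np.exp(-mu * r)

def risk_negbin(mu, r, k):
    """Negative binomial dose (mean mu, dispersion k): G(s) = (1 + mu*(1-s)/k)**(-k)."""
    return 1.0 - (1.0 + mu * r / k) ** (-k)

for mu in (0.1, 1.0, 10.0):                    # mean doses (illustrative)
    p_pois = risk_poisson(mu, r=0.05)
    p_clust = risk_negbin(mu, r=0.05, k=0.5)   # k = 0.5 mimics strong clustering
    print(f"mean dose {mu}: Poisson risk = {p_pois:.4f}, clustered risk = {p_clust:.4f}")
```

The clustered risk never exceeds the Poisson risk at the same mean dose, which is the qualitative result the abstract proves in general.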
5.
Risk factor selection is very important in the insurance industry, as it supports precise rate making and the study of the features of high‐quality insureds. Zero‐inflated data are common in insurance, such as the claim frequency data, and zero‐inflation makes the selection of risk factors quite difficult. In this article, we propose a new risk factor selection approach, EM adaptive LASSO, for a zero‐inflated Poisson regression model, which combines the EM algorithm and adaptive LASSO penalty. Under some regularity conditions, we show that, with probability approaching 1, important factors are selected and the redundant factors are excluded. We investigate the finite sample performance of the proposed method through a simulation study and the analysis of car insurance data from the SAS Enterprise Miner database.
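To make the two ingredients of the proposed approach concrete, the sketch below writes out a zero-inflated Poisson log-likelihood and an adaptive-LASSO penalty (weights proportional to the reciprocal of an initial estimate). It only assembles the penalized objective; the EM algorithm, the regularity conditions, and the SAS data set are not reproduced, and all variable names and the toy data are illustrative.

```python
# Zero-inflated Poisson log-likelihood plus adaptive-LASSO penalty (illustrative sketch).
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, lam, pi):
    """ZIP: P(0) = pi + (1-pi)*exp(-lam); P(y>0) = (1-pi)*Poisson(y; lam)."""
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-lam)),
                  np.log(1.0 - pi) + log_pois)
    return np.sum(ll)

def penalized_objective(beta, X, y, pi, tuning, beta_init):
    """Negative ZIP log-likelihood plus adaptive-LASSO penalty with weights 1/|beta_init|."""
    lam = np.exp(X @ beta)                                  # log link for the Poisson mean
    weights = 1.0 / np.maximum(np.abs(beta_init), 1e-8)
    return -zip_loglik(y, lam, pi) + tuning * np.sum(weights * np.abs(beta))

# toy usage with synthetic data
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
beta_true = np.array([0.5, 0.8, 0.0])                       # last factor is redundant
y = np.where(rng.uniform(size=200) < 0.3, 0, rng.poisson(np.exp(X @ beta_true)))
print(penalized_objective(beta_true, X, y, pi=0.3, tuning=1.0, beta_init=beta_true))
```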
6.
Gang Xie, Anne Roiko, Helen Stratton, Charles Lemckert, Peter K. Dunn, Kerrie Mengersen. Risk Analysis, 2017, 37(7): 1388-1402
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta‐Poisson model is a two‐parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P(d) as the probability of infection at a given mean dose d, the widely used dose–response model P(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta‐Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model); and constraint conditions on α̂ and β̂ as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta‐Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta‐Poisson model dose–response curve.
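To illustrate the quantities being compared, the sketch below evaluates the exact beta-Poisson model via the Kummer function, P(d) = 1 − 1F1(α; α + β; −d), the approximate formula, and a gamma-based validity probability Pr(0 < r < 1). The parameter values are invented for demonstration and are not taken from the 85 QMRA Wiki data sets; treating r as Gamma(shape = α̂, rate = β̂) is an assumption consistent with the derivation of the approximate model.

```python
# Exact vs. approximate beta-Poisson dose-response, plus a gamma-based validity measure.
import numpy as np
from scipy.special import hyp1f1
from scipy.stats import gamma

def exact_beta_poisson(d, alpha, beta):
    """Exact model: P(d) = 1 - 1F1(alpha; alpha + beta; -d)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def approx_beta_poisson(d, alpha, beta):
    """Widely used approximation: P(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

alpha_hat, beta_hat = 0.2, 40.0                       # illustrative estimates
validity = gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)   # Pr(0 < r < 1)
print(f"validity measure Pr(0<r<1) = {validity:.4f}")

doses = np.array([1.0, 10.0, 100.0, 1000.0])
print("exact :", np.round(exact_beta_poisson(doses, alpha_hat, beta_hat), 4))
print("approx:", np.round(approx_beta_poisson(doses, alpha_hat, beta_hat), 4))
```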
7.
Ultra‐high‐frequency data is defined to be a full record of transactions and their associated characteristics. The transaction arrival times and accompanying measures can be analyzed as marked point processes. The ACD point process developed by Engle and Russell (1998) is applied to IBM transaction arrival times to develop semiparametric hazard estimates and conditional intensities. Combining these intensities with a GARCH model of prices produces ultra‐high‐frequency measures of volatility. Both returns and variances are found to be negatively influenced by long durations, as suggested by asymmetric information models of market microstructure.
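As a pointer to how duration dynamics enter such an analysis, the sketch below simulates an ACD(1,1) process, x_i = ψ_i·ε_i with ψ_i = ω + a·x_{i−1} + b·ψ_{i−1} and unit-exponential errors. The parameter values are illustrative; the paper's semiparametric hazard estimates and the GARCH price component are not reproduced here.

```python
# Minimal ACD(1,1) duration simulation in the spirit of Engle and Russell (1998).
import numpy as np

rng = np.random.default_rng(42)
omega, a, b = 0.1, 0.1, 0.85          # persistence a + b < 1 (illustrative values)
n = 10_000
x = np.empty(n)
psi = omega / (1 - a - b)             # start at the unconditional mean duration
for i in range(n):
    x[i] = psi * rng.exponential(1.0) # duration = conditional mean * exponential error
    psi = omega + a * x[i] + b * psi  # update the conditional expected duration

print("simulated mean duration:", round(x.mean(), 3),
      "| unconditional mean:", round(omega / (1 - a - b), 3))
```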
8.
Torben G. Andersen, Tim Bollerslev, Nour Meddahi. Econometrica: Journal of the Econometric Society, 2005, 73(1): 279-296
We develop general model‐free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit recent nonparametric asymptotic distributional results, are both easy to implement and highly accurate in empirically realistic situations. We also illustrate that properly accounting for the measurement errors in the volatility forecast evaluations reported in the existing literature can result in markedly higher estimates for the true degree of return volatility predictability.
9.
Wolfgang Barth, Michael Manitz, Raik Stolletz. Production and Operations Management, 2010, 19(6): 757-768
In this paper, we analyze the performance of call centers of financial service providers with two levels of support and a time‐dependent overflow mechanism. Waiting calls from the front‐office queue flow over to the back office if a waiting‐time limit is reached and at least one back‐office agent is available. The analysis of such a system with time‐dependent overflow is reduced to the analysis of a continuous‐time Markov chain with state‐dependent overflow probabilities. To approximate the system with time‐dependent overflow, some waiting‐based performance measures are modified. Numerical results demonstrate the reliability of this Markovian performance approximation for different parameter settings. A sensitivity analysis shows the impact of the waiting‐time limit and the dependence of the performance measures on the arrival rate.
10.
K. Hoelzer, Y. Chen, S. Dennis, P. Evans, R. Pouillot, B. J. Silk, I. Walls. Risk Analysis, 2013, 33(9): 1568-1581
Listeria monocytogenes is a leading cause of hospitalization, fetal loss, and death due to foodborne illnesses in the United States. A quantitative assessment of the relative risk of listeriosis associated with the consumption of 23 selected categories of ready‐to‐eat foods, published by the U.S. Department of Health and Human Services and the U.S. Department of Agriculture in 2003, has been instrumental in identifying the food products and practices that pose the greatest listeriosis risk and has guided the evaluation of potential intervention strategies. Dose‐response models, which quantify the relationship between an exposure dose and the probability of adverse health outcomes, were essential components of the risk assessment. However, because of data gaps and limitations in the available data and modeling approaches, considerable uncertainty existed. Since publication of the risk assessment, new data have become available for modeling L. monocytogenes dose‐response. At the same time, recent advances in the understanding of L. monocytogenes pathophysiology and strain diversity have warranted a critical reevaluation of the published dose‐response models. To discuss strategies for modeling L. monocytogenes dose‐response, the Interagency Risk Assessment Consortium (IRAC) and the Joint Institute for Food Safety and Applied Nutrition (JIFSAN) held a scientific workshop in 2011 (details available at http://foodrisk.org/irac/events/). The main findings of the workshop and the most current and relevant data identified during the workshop are summarized and presented in the context of L. monocytogenes dose‐response. This article also discusses new insights on dose‐response modeling for L. monocytogenes and research opportunities to meet future needs.
11.
Philip J. Schmidt, Katarina D. M. Pintar, Aamir M. Fazil, Edward Topp. Risk Analysis, 2013, 33(9): 1677-1693
Dose‐response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose‐response model parameters are estimated using limited epidemiological data is rarely quantified. Second‐order risk characterization approaches incorporating uncertainty in dose‐response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta‐Poisson dose‐response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta‐Poisson dose‐response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta‐Poisson dose‐response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta‐Poisson model are proposed, and simple algorithms to evaluate actual beta‐Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta‐Poisson dose‐response model parameters is attributable to the absence of low‐dose data. This region includes beta‐Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.
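The flavor of the Bayesian computation described above can be conveyed with a few lines of Python: a random-walk Metropolis sampler for the single parameter r of the exponential dose-response model P(d) = 1 − exp(−r·d) under a binomial likelihood. The dosing data, the flat prior on log r, and the tuning constants are synthetic assumptions; the paper's analysis was carried out in OpenBUGS and also covers the (decomposed) beta-Poisson model.

```python
# Random-walk Metropolis for the exponential dose-response parameter r (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
dose = np.array([1e2, 1e3, 1e4, 1e5])      # synthetic doses (organisms)
n = np.array([10, 10, 10, 10])             # subjects per dose group
y = np.array([1, 3, 7, 10])                # infected per dose group (synthetic)

def log_post(log_r):
    """Binomial log-likelihood under P(d) = 1 - exp(-r*d), flat prior on log r."""
    r = np.exp(log_r)
    p = np.clip(1.0 - np.exp(-r * dose), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

samples, log_r = [], np.log(1e-3)
for _ in range(20_000):
    prop = log_r + rng.normal(0, 0.3)                       # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(log_r):
        log_r = prop
    samples.append(log_r)

r_post = np.exp(np.array(samples[5_000:]))                  # drop burn-in
print("posterior median and 95% CrI for r:",
      np.round(np.percentile(r_post, [50, 2.5, 97.5]), 6))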
12.
Several emerging studies have focused on the pricing issue of bandwidth sharing between Wi‐Fi and WiMAX networks; however, most either concentrate on the design of collaborative protocols or address the issue without overall consideration of consumer preferences and contract design. In this study, we explore a wireless service market in which there are two wireless service providers operating Wi‐Fi and WiMAX. One of the research dimensions given in this study is whether wireless service providers implement bandwidth sharing, while the other is whether they make decisions individually or jointly. By involving consumer preferences and a wholesale price contract in the present model, we find that bandwidth sharing would benefit a WiMAX service provider, yet a Wi‐Fi service provider would make no significant savings under a wholesale price contract. In addition, the profit of a WiMAX service provider may increase with Wi‐Fi coverage when bandwidth sharing has been implemented but decrease with Wi‐Fi coverage when both wireless services operate without bandwidth sharing. Furthermore, the WiMAX service provider allocates more capacity when the average usage rate increases, but lowers the expenditure of capacity when the average usage rate is too high.
13.
Lauren M. Gardner, David Rey, Anita E. Heywood, Renin Toms, James Wood, S. Travis Waller, C. Raina MacIntyre. Risk Analysis, 2014, 34(8): 1391-1400
Between April 2012 and June 2014, 820 laboratory‐confirmed cases of the Middle East respiratory syndrome coronavirus (MERS‐CoV) were reported in the Arabian Peninsula, Europe, North Africa, Southeast Asia, the Middle East, and the United States. The observed epidemiology differs from that of SARS, which showed a classic epidemic curve and was over in eight months. The much longer persistence of MERS‐CoV in the population, with a lower reproductive number, some evidence of human‐to‐human transmission but an otherwise sporadic pattern, is difficult to explain. Using available epidemiological data, we implemented mathematical models to explore the transmission dynamics of MERS‐CoV in the context of mass gatherings such as the Hajj pilgrimage, and found a discrepancy between the observed and expected epidemiology. The fact that no epidemic occurred in returning Hajj pilgrims in either 2012 or 2013 contradicts the long persistence of the virus in human populations. The explanations for this discrepancy include an ongoing, repeated nonhuman/sporadic source, a large proportion of undetected or unreported human‐to‐human cases, or a combination of the two. Furthermore, MERS‐CoV is occurring in a region that is a major global transport hub and hosts significant mass gatherings, making it imperative to understand the source and means of the yet unexplained and puzzling ongoing persistence of the virus in the human population.
14.
Gregory Connor, Matthias Hagmann, Oliver Linton. Econometrica: Journal of the Econometric Society, 2012, 80(2): 713-754
This paper develops a new estimation procedure for characteristic‐based factor models of stock returns. We treat the factor model as a weighted additive nonparametric regression model, with the factor returns serving as time‐varying weights and a set of univariate nonparametric functions relating security characteristics to the associated factor betas. We use a time‐series and cross‐sectional pooled weighted additive nonparametric regression methodology to simultaneously estimate the factor returns and characteristic‐beta functions. By avoiding the curse of dimensionality, our methodology allows for a larger number of factors than existing semiparametric methods. We apply the technique to the three‐factor Fama–French model, Carhart's four‐factor extension of it that adds a momentum factor, and a five‐factor extension that adds an own‐volatility factor. We find that momentum and own‐volatility factors are at least as important, if not more important, than size and value in explaining equity return comovements. We test the multifactor beta pricing theory against a general alternative using a new nonparametric test.
15.
To date, the variant Creutzfeldt‐Jakob disease (vCJD) risk assessments that have been performed have primarily focused on predicting future vCJD cases in the United Kingdom, which underwent a bovine spongiform encephalopathy (BSE) epidemic between 1980 and 1996. Surveillance of potential BSE cases was also used to assess vCJD risk, especially in other BSE‐prevalent EU countries. However, little is known about the vCJD risk for uninfected individuals who accidentally consume BSE‐contaminated meat products in or imported from a country with prevalent BSE. In this article, taking into account the biological mechanism of abnormal prion PrPres aggregation in the brain, the probability of exposure, and the expected amount of ingested infectivity, we establish a stochastic mean exponential growth model of lifetime exposure through dietary intake. Given the findings that BSE agents behave similarly in humans and macaques, we obtained parameter estimates from experimental macaque data. We then estimated the accumulation of abnormal prions to assess lifetime risk of developing clinical signs of vCJD. Based on the observed number of vCJD cases and the estimated number of exposed individuals during the BSE epidemic period from 1980 to 1996 in the United Kingdom, an exposure threshold hypothesis is proposed. Given the age‐specific risk of infection, the hypothesis explains the observations very well from an extreme‐value distribution fitting of the estimated BSE infectivity exposure. The current BSE statistics in the United Kingdom are provided as an example.
16.
Joy M. Field, Gregory R. Heim, Kingshuk K. Sinha. Production and Operations Management, 2004, 13(4): 291-306
In this paper, we develop a process model for assessing and managing e‐service quality based on the underlying components of the e‐service system and, in turn, address the growing need to look in more detail at the system component level for sources of poor quality. The proposed process model comprises a set of entities representing the e‐service system, a network defining the linking between all pairs of entities via transactions and product flows, and a set of outcomes of the processes in terms of quality dimensions. The process model is developed using Unified Modeling Language (UML), a pictorial language for specifying service designs that has achieved widespread acceptance among e‐service designers. Examples of applications of the process model are presented to illustrate how the model can be used to identify operational levers for managing and improving e‐service quality.
17.
Miao Guo, Abhinav Mishra, Robert L. Buchanan, Jitender P. Dubey, Dolores E. Hill, H. Ray Gamble, Jeffrey L. Jones, Xianzhi Du, Abani K. Pradhan. Risk Analysis, 2016, 36(5): 926-938
Toxoplasma gondii is a protozoan parasite that is responsible for approximately 24% of deaths attributed to foodborne pathogens in the United States. It is thought that a substantial portion of human T. gondii infections is acquired through the consumption of meats. The dose‐response relationship for human exposures to T. gondii‐infected meat is unknown because no human data are available. The goal of this study was to develop and validate dose‐response models based on animal studies, and to compute scaling factors so that animal‐derived models can predict T. gondii infection in humans. Relevant studies in the literature were collected and appropriate studies were selected based on animal species, stage, genotype of T. gondii, and route of infection. Data were pooled and fitted to four sigmoidal‐shaped mathematical models, and model parameters were estimated using maximum likelihood estimation. Data from a mouse study were selected to develop the dose‐response relationship. Exponential and beta‐Poisson models, which predicted similar responses, were selected as reasonable dose‐response models based on their simplicity, biological plausibility, and goodness of fit. A confidence interval of the parameter was determined by constructing 10,000 bootstrap samples. Scaling factors were computed by matching the predicted infection cases with the epidemiological data. Mouse‐derived models were validated against data for the dose‐infection relationship in rats. A human dose‐response model was developed as P(d) = 1 − exp(−0.0015 × 0.005 × d) or P(d) = 1 − (1 + d × 0.003 / 582.414)^(−1.479). Both models predict the human response after consuming T. gondii‐infected meats, and provide an enhanced risk characterization in a quantitative microbial risk assessment model for this pathogen.
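Because the abstract states both fitted human dose-response equations explicitly, they can be evaluated directly; the short sketch below does so over a range of doses (the dose values themselves are arbitrary illustrations, not values from the paper).

```python
# Evaluate the two human dose-response models quoted in the abstract.
import numpy as np

def p_exponential(d):
    """Exponential model: P(d) = 1 - exp(-0.0015 * 0.005 * d)."""
    return 1.0 - np.exp(-0.0015 * 0.005 * d)

def p_beta_poisson(d):
    """Approximate beta-Poisson model: P(d) = 1 - (1 + d*0.003/582.414)**(-1.479)."""
    return 1.0 - (1.0 + d * 0.003 / 582.414) ** (-1.479)

for d in (1e2, 1e3, 1e4, 1e5, 1e6):   # illustrative doses
    print(f"dose {d:>9.0f}: exponential={p_exponential(d):.4f}, "
          f"beta-Poisson={p_beta_poisson(d):.4f}")
```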
18.
The estimated cost of fire in the United States is about $329 billion a year, yet there are gaps in the literature to measure the effectiveness of investment and to allocate resources optimally in fire protection. This article fills these gaps by creating data‐driven empirical and theoretical models to study the effectiveness of nationwide fire protection investment in reducing economic and human losses. The regression between investment and loss vulnerability shows high R2 values (≈0.93). This article also contributes to the literature by modeling strategic (national‐level or state‐level) resource allocation (RA) for fire protection with equity‐efficiency trade‐off considerations, while existing literature focuses on operational‐level RA. This model and its numerical analyses provide techniques and insights to aid the strategic decision‐making process. The results from this model are used to calculate fire risk scores for various geographic regions, which can be used as an indicator of fire risk. A case study of federal fire grant allocation is used to validate and show the utility of the optimal RA model. The results also identify potential underinvestment and overinvestment in fire protection in certain regions. This article presents scenarios in which the model presented outperforms the existing RA scheme, when compared in terms of the correlation of resources allocated with actual number of fire incidents. This article provides some novel insights to policymakers and analysts in fire protection and safety that would help in mitigating economic costs and saving lives.
19.
Emerging diseases (ED) can have devastating effects on agriculture. Consequently, agricultural insurance for ED can develop if basic insurability criteria are met, including the capability to estimate the severity of ED outbreaks with associated uncertainty. The U.S. farm‐raised channel catfish (Ictalurus punctatus) industry was used to evaluate the feasibility of using a disease spread simulation modeling framework to estimate the potential losses from new ED for agricultural insurance purposes. Two stochastic models were used to simulate the spread of ED between and within channel catfish ponds in Mississippi (MS) under high, medium, and low disease impact scenarios. The mean (95% prediction interval (PI)) proportion of ponds infected within disease‐impacted farms was 7.6% (3.8%, 22.8%), 24.5% (3.8%, 72.0%), and 45.6% (4.0%, 92.3%), and the mean (95% PI) proportion of fish mortalities in ponds affected by the disease was 9.8% (1.4%, 26.7%), 49.2% (4.7%, 60.7%), and 88.3% (85.9%, 90.5%) for the low, medium, and high impact scenarios, respectively. The farm‐level mortality losses from an ED were up to 40.3% of the total farm inventory and can be used for insurance premium rate development. Disease spread modeling provides a systematic way to organize the current knowledge on the ED perils and, ultimately, use this information to help develop actuarially sound agricultural insurance policies and premiums. However, the estimates obtained will include a large amount of uncertainty driven by the stochastic nature of disease outbreaks, by the uncertainty in the frequency of future ED occurrences, and by the often sparse data available from past outbreaks.
20.
Drawing on the resource‐based view, we propose a configurational perspective of how information technology (IT) assets and capabilities affect firm performance. Our premise is that IT assets and IT managerial capabilities are components in organizational design, and as such, their impact can only be understood by taking into consideration the interactions between those IT assets and capabilities and other non‐IT components. We develop and test a model that assesses the impact of explicit and tacit IT resources by examining their interactions with two non‐IT resources (open communication and business work practices). Our analysis of data collected from a sample of firms in the third‐party logistics industry supports the proposed configurational perspective, showing that IT resources can either enhance (complement) or suppress (by substituting for) the effects of non‐IT resources on process performance. More specifically, we find evidence of complementarities between shared business–IT knowledge and business work practice and between the scope of IT applications and an open communication culture in affecting the performance of the customer‐service process; but there is evidence of substitutability between shared knowledge and open communications. For decision making, our results reinforce the need to account for all dimensions of possible interaction between IT and non‐IT resources when evaluating IT investments.