Similar Literature
20 similar documents found.
1.
This study develops dose–response models for Ebolavirus using previously published data sets from the open literature. Two such articles were identified in which three different species of nonhuman primates were challenged by aerosolized Ebolavirus in order to study pathology and clinical disease progression. Dose groups were combined and pooled across each study in order to facilitate modeling. The endpoint of each experiment was death. The exponential and exact beta-Poisson models were fit to the data using maximum likelihood estimation. Although both models provided a good fit, the exact beta-Poisson was deemed the recommended model because it more closely approximated the probability of response at low doses. Although transmission is generally considered to be dominated by person-to-person contact, aerosolization is a possible route of exposure. This route could be particularly concerning for persons in occupational roles managing contaminated liquid wastes from patients being treated for Ebola infection, and for the wastewater community responsible for disinfection. This study therefore produces a necessary mathematical relationship between exposure dose and risk of death for the inhalation route of exposure that can support quantitative microbial risk assessment aimed at informing risk mitigation strategies, including personal protection policies against occupational exposures.
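To make the fitting step concrete, here is a minimal sketch of fitting both candidate models by maximum likelihood with a binomial likelihood per dose group. The doses, group sizes, and death counts below are invented placeholders, not the published primate data; the exact beta-Poisson is taken in its standard Kummer confluent hypergeometric form P(d) = 1 − ₁F₁(α, α+β, −d).

```python
# Minimal sketch: fit exponential and exact beta-Poisson dose-response
# models by maximum likelihood. All data values are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.special import hyp1f1
from scipy.stats import binom

dose = np.array([1e0, 1e1, 1e2, 1e3])   # hypothetical inhaled doses
n    = np.array([6, 6, 6, 6])           # animals per dose group (hypothetical)
dead = np.array([1, 3, 5, 6])           # observed deaths (hypothetical)

def p_exponential(d, k):
    return 1.0 - np.exp(-k * d)

def p_beta_poisson_exact(d, alpha, beta):
    # Exact beta-Poisson: P = 1 - 1F1(alpha, alpha + beta, -d)
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def nll(params, model):
    # exp() keeps parameters positive during unconstrained optimization
    p = np.clip(model(dose, *np.exp(params)), 1e-12, 1 - 1e-12)
    return -binom.logpmf(dead, n, p).sum()

fit_exp = minimize(nll, x0=[np.log(0.01)], args=(p_exponential,),
                   method="Nelder-Mead")
fit_bp  = minimize(nll, x0=[np.log(0.2), np.log(5.0)],
                   args=(p_beta_poisson_exact,), method="Nelder-Mead")
print("exponential k =", np.exp(fit_exp.x))
print("beta-Poisson (alpha, beta) =", np.exp(fit_bp.x))
```

Comparing the two fitted curves at low doses is what motivates the paper's preference for the exact beta-Poisson form.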

2.
This paper considers the estimation problem of structural models for which empirical restrictions are characterized by a fixed point constraint, such as structural dynamic discrete choice models or models of dynamic games. We analyze a local condition under which the nested pseudo likelihood (NPL) algorithm converges to a consistent estimator, and derive its convergence rate. We find that the NPL algorithm may not necessarily converge to a consistent estimator when the fixed point mapping does not have a local contraction property. To address the issue of divergence, we propose alternative sequential estimation procedures that can converge to a consistent estimator even when the NPL algorithm does not.
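For readers unfamiliar with the procedure under analysis, here is a schematic sketch of the generic NPL loop for a scalar parameter. The mapping `psi` (the fixed-point/best-response mapping) and `pseudo_loglik` are problem-specific stand-ins that the user would supply; as the abstract notes, this iteration is only guaranteed to converge to a consistent estimator when `psi` is a local contraction.

```python
# Schematic NPL iteration (scalar parameter, abstract mappings).
from scipy.optimize import minimize_scalar

def npl(data, psi, pseudo_loglik, P0, max_iter=100, tol=1e-8):
    P = P0
    theta = None
    for _ in range(max_iter):
        # Step 1: pseudo maximum likelihood given current choice probabilities P
        res = minimize_scalar(lambda th: -pseudo_loglik(th, P, data))
        theta_new = res.x
        # Step 2: update P through the fixed-point mapping
        P_new = psi(theta_new, P)
        if theta is not None and abs(theta_new - theta) < tol:
            return theta_new, P_new
        theta, P = theta_new, P_new
    return theta, P  # may fail to settle when psi is not a contraction
```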

3.
When a continuous‐time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed‐form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.

4.
In this study, we consider the integrated inventory replenishment and transportation operations in a supply chain where the orders placed by the downstream retailer are dispatched by the upstream warehouse via an in‐house fleet of limited size. We first consider the single‐item single‐echelon case where the retailer operates with a quantity based replenishment policy, (r,Q), and the warehouse is an ample supplier. We model the transportation operations as a queueing system and derive the operating characteristics of the system in exact terms. We extend this basic model to a two‐echelon supply chain where the warehouse employs a base‐stock policy. The departure process of the warehouse is characterized in distribution, which is then approximated by an Erlang arrival process by matching the first two moments for the analysis of the transportation queueing system. The operating characteristics and the expected cost rate are derived. An extension of this system to multiple retailers is also discussed. Numerical results are presented to illustrate the performance and the sensitivity of the models and the value of coordinating inventory and transportation operations.
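A small illustration of the two-moment Erlang matching mentioned above (the numbers are invented, not derived from the warehouse model): a common simple choice is to pick the Erlang shape from the squared coefficient of variation and then match the mean with the phase rate.

```python
# Two-moment Erlang approximation of an interarrival-time distribution.
mean, var = 4.0, 5.0           # hypothetical first two moments
scv = var / mean**2            # squared coefficient of variation
k = max(1, round(1.0 / scv))   # Erlang-k shape (integer)
rate = k / mean                # phase rate so the mean matches
print(f"Erlang-{k}, phase rate {rate:.3f} "
      f"(mean {k/rate:.3f}, var {k/rate**2:.3f})")
```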

5.
This paper proposes a horizontal collaborative approach for the wine bottling scheduling problem. The opportunities for collaboration in this problem arise because many local wine producers are usually located in the same region and bottling is a standard process. Collaboration among wineries is modeled as a cooperative game, whose characteristic function is derived from a mixed integer linear programming model. Real-world instances of the problem are, however, unlikely to be solved to optimality due to their complex combinatorial structure and large dimension. This motivates the introduction of an approximated version of the original game, where the characteristic function is computed through a heuristic procedure. Unlike the exact game, the approximated game may violate the subadditivity property. It therefore becomes relevant not only to find a stable cost allocation but also to find a coalition structure that selects the best partition of the set of firms. We propose a maximum entropy methodology that can address these two problems simultaneously. Numerical experiments illustrate how this approach applies and reveal that collaboration can have important positive effects in wine bottling scheduling, decreasing delays by 33.4% to 56.9% when improvement heuristic solutions are used. In contrast to the exact game, in which the grand coalition is always the best outcome, in the approximated game companies may be better off forming smaller coalitions. We also devise a simple procedure to repair the characteristic function of the approximated game so that it recovers the subadditivity property.
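To illustrate two of the ideas above, here is a minimal sketch (with an invented three-winery cost game, not the paper's data or its repair procedure) of enforcing subadditivity on a heuristic characteristic function: each coalition's cost is replaced by the cheapest cost attainable by any partition of that coalition. Exhaustive partition enumeration is only feasible for small player sets.

```python
# Repair a heuristic cost game so that v(S) never exceeds the cost of
# splitting S into sub-coalitions (i.e., restore subadditivity).
from itertools import combinations

def partitions(s):
    s = list(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for block_rest in combinations(rest, k):
            block = frozenset((first,) + block_rest)
            remaining = set(rest) - set(block_rest)
            for p in partitions(remaining):
                yield [block] + p

def repair(v, players):
    # v maps frozensets to heuristic costs; process coalitions by size
    # so every block of a partition is already repaired when needed.
    v_fixed = {}
    for size in range(1, len(players) + 1):
        for coal in map(frozenset, combinations(players, size)):
            best = min(sum(v_fixed.get(b, v[b]) for b in p)
                       for p in partitions(coal))
            v_fixed[coal] = min(v[coal], best)
    return v_fixed

# Hypothetical game where the grand coalition violates subadditivity:
v = {frozenset({1}): 10, frozenset({2}): 12, frozenset({3}): 11,
     frozenset({1, 2}): 18, frozenset({1, 3}): 17, frozenset({2, 3}): 19,
     frozenset({1, 2, 3}): 40}
print(repair(v, [1, 2, 3])[frozenset({1, 2, 3})])  # 29, via {1,2} and {3}
```

The same partition enumeration also answers the coalition-structure question for tiny instances: the minimizing partition is the best way to split the firms.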

6.
This paper develops the fixed‐smoothing asymptotics in a two‐step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second‐step GMM criterion function converges weakly to a random matrix and the two‐step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed‐smoothing asymptotic distribution are high order correct under the conventional increasing‐smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed‐smoothing approximation are much more accurate in size than existing tests.
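For intuition about the "orthonormal series covariance estimator" referenced above, here is a minimal sketch using the cosine basis Φ_j(t) = √2·cos(jπt); the moment series is simulated noise, and K (the number of basis functions, held fixed under fixed-smoothing asymptotics) is an arbitrary illustrative choice.

```python
# Orthonormal-series long-run variance estimator with a cosine basis.
import numpy as np

def series_lrv(u, K):
    """u: (T, m) array of demeaned moment conditions; K: basis terms."""
    T, m = u.shape
    t = (np.arange(1, T + 1) - 0.5) / T
    lrv = np.zeros((m, m))
    for j in range(1, K + 1):
        phi = np.sqrt(2.0) * np.cos(j * np.pi * t)         # basis function
        lam = (phi[:, None] * u).sum(axis=0) / np.sqrt(T)  # projection
        lrv += np.outer(lam, lam)
    return lrv / K

rng = np.random.default_rng(0)
u = rng.standard_normal((200, 2))
print(series_lrv(u - u.mean(axis=0), K=14))
```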

7.
This article investigates how accurately experts (underwriters) and lay persons (university students) judge the risks posed by life-threatening events. Only one prior study (Slovic, Fischhoff, & Lichtenstein, 1985) has investigated the veracity of expert versus lay judgments of the magnitude of risk. In that study, a heterogeneous grouping of 15 experts was found to judge, using marginal estimations, a variety of risks as closer to the true annual frequencies of death than convenience samples of the lay population. In this study, we use a larger, homogeneous sample of experts performing an ecologically valid task. We also ask our respondents to assess frequencies and relative frequencies directly, rather than ask for a "risk" estimate (a response mode subject to possible qualitative attributions) as was done in the Slovic et al. study. Although we find that the experts outperformed lay persons on a number of measures, the differences are small, and both groups showed similar global biases in terms of: (1) overestimating the likelihood of dying from a condition (marginal probability) and of dying from a condition given that it happens to you (conditional probability), and (2) underestimating the ratios of marginal and conditional likelihoods between pairs of potentially lethal events. In spite of these scaling problems, both groups showed quite good performance in ordering the lethal events in terms of marginal and conditional likelihoods. We discuss the nature of expertise using a framework developed by Bolger and Wright (1994), and consider whether the commonsense assumption of the superiority of expert risk assessors in making magnitude judgments of risk is, in fact, sensible.

8.
The goal of Emergency Medical Service (EMS) systems is to provide rapid response to emergency calls in order to save lives. This paper proposes a relocation strategy to improve the performance of EMS systems. In practice, EMS systems often use a compliance table to relocate ambulances. A compliance table specifies ambulance base stations as a function of the state of the system. We consider a nested-compliance table, which restricts the number of relocations that can occur simultaneously. We formulate the nested-compliance table model as an integer programming model in order to maximize expected coverage. We determine an optimal nested-compliance table policy using steady state probabilities of a Markov chain model with relocation as input parameters. These parameter approximations are independent of the exact compliance table used. We assume that there is a single type of medical unit, single call priority, and no patient queue. We validate the model by applying the nested-compliance table policies in a simulated system using real-world data. The numerical results show the benefit of our model over a static policy based on the adjusted maximum expected covering location problem (AMEXCLP).
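As a toy version of the input-parameter step, the sketch below computes steady-state probabilities for a small invented Markov chain over system states (e.g., the number of busy units); the transition matrix is a placeholder, not the relocation model from the article.

```python
# Steady-state probabilities as the leading left eigenvector of P.
import numpy as np

P = np.array([[0.7, 0.3, 0.0],    # hypothetical transitions among
              [0.2, 0.6, 0.2],    # states 0, 1, 2 busy units
              [0.0, 0.4, 0.6]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])   # eigenvalue 1 for a stochastic P
pi = pi / pi.sum()                          # normalize (also fixes the sign)
print(pi)   # these probabilities feed the integer program as parameters
```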

9.
GARCH models are commonly used as latent processes in econometrics, financial economics, and macroeconomics. Yet no exact likelihood analysis of these models has been provided so far. In this paper we outline the issues and suggest a Markov chain Monte Carlo algorithm which allows the calculation of a classical estimator via the simulated EM algorithm or a Bayesian solution in O(T) computational operations, where T denotes the sample size. We assess the performance of our proposed algorithm in the context of both artificial examples and an empirical application to 26 UK sectorial stock returns, and compare it to existing approximate solutions.
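For reference, the sketch below shows the GARCH(1,1) variance recursion that is the building block of such models; when the GARCH process is latent, the shocks entering this recursion are unobserved, which is why an exact likelihood requires the simulation machinery the paper develops. The parameter values and data are placeholders.

```python
# GARCH(1,1) variance recursion: h_t = omega + alpha*y_{t-1}^2 + beta*h_{t-1}.
import numpy as np

def garch_filter(y, omega, alpha, beta):
    h = np.empty_like(y, dtype=float)
    h[0] = omega / (1.0 - alpha - beta)   # unconditional variance (alpha+beta < 1)
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(4)
y = rng.standard_normal(500) * 0.01       # placeholder shock series
print(garch_filter(y, omega=1e-6, alpha=0.05, beta=0.9)[:5])
```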

10.

In this paper, we study analytically the implications of a controllable lead time and a random supplier capacity for the continuous review inventory policy, in which the order quantity, reorder point, and lead time are decision variables. Two models are considered: one in which lead-time demand is normally distributed and one in which it is distribution free. For both cases, after formulating the general model, we develop some properties of the optimal ordering policy. In particular, we show that the expected annual total cost is a unimodal function and quasi-convex in the order quantity. When the variable capacity distribution is exponential, we develop effective procedures for finding the optimal solutions. Furthermore, we also examine the effects of the parameters.

11.
This paper develops a method for inference in dynamic discrete choice models with serially correlated unobserved state variables. Estimation of these models involves computing high‐dimensional integrals that are present in the solution to the dynamic program and in the likelihood function. First, the paper proposes a Bayesian Markov chain Monte Carlo estimation procedure that can handle the problem of multidimensional integration in the likelihood function. Second, the paper presents an efficient algorithm for solving the dynamic program suitable for use in conjunction with the proposed estimation procedure.

12.
For stationary time series models with serial correlation, we consider generalized method of moments (GMM) estimators that use heteroskedasticity and autocorrelation consistent (HAC) positive definite weight matrices and generalized empirical likelihood (GEL) estimators based on smoothed moment conditions. Following the analysis of Newey and Smith (2004) for independent observations, we derive second order asymptotic biases of these estimators. The inspection of bias expressions reveals that the use of smoothed GEL, in contrast to GMM, removes the bias component associated with the correlation between the moment function and its derivative, while the bias component associated with third moments depends on the employed kernel function. We also analyze the case of no serial correlation, and find that the seemingly unnecessary smoothing and HAC estimation can reduce the bias for some of the estimators.

13.
Ulaş Özen & Mustafa K. Doğru 《Omega》2012,40(3):348-357
We consider a single-stage inventory system facing non-stationary stochastic demand of the customers in a finite planning horizon. Motivated by practice, the replenishment times need to be determined and frozen once and for all at the beginning of the horizon, while decisions on the exact replenishment quantities can be deferred until the replenishment time. This operating scheme is referred to as a "static-dynamic uncertainty" strategy in the literature [3]. We consider dynamic fixed-ordering and linear end-of-period holding costs, as well as dynamic penalty costs, or service levels. We prove that the optimal ordering policy is a base stock policy for both the penalty cost and the service level constrained models. Although an exhaustive search based on dynamic programming yields the optimal ordering periods and the associated base stock levels, its exponential complexity makes it impractical to compute the optimal policy parameters for longer planning horizons. Thus, we develop two heuristics. Numerical experiments show that both heuristics perform well in terms of solution quality and scale up efficiently; hence, any practically relevant large instance can be solved in reasonable time. Finally, we discuss how our results and heuristics can be extended to handle capacity limitations and minimum order quantity considerations.
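A minimal simulation sketch of the "static-dynamic uncertainty" scheme may help: the replenishment periods and base stock levels below are frozen up front (both invented for illustration), while each order quantity is decided only at order time as an order-up-to difference against the realized inventory.

```python
# Static part: frozen replenishment periods with base stock levels.
# Dynamic part: quantities chosen at order time (order-up-to logic).
import numpy as np

rng = np.random.default_rng(3)
base_stock = {0: 60, 4: 55, 8: 70}   # period -> level (hypothetical)
mean_demand = [10, 12, 15, 9, 8, 14, 16, 11, 13, 15, 9, 10]

inventory = 0.0
for t, mu in enumerate(mean_demand):
    if t in base_stock:
        inventory += max(0.0, base_stock[t] - inventory)  # order up to S_t
    inventory -= rng.poisson(mu)      # non-stationary stochastic demand
print("ending inventory:", inventory)
```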

14.
This article describes a simple model for quantifying the health impacts of toxic metal emissions. In contrast to most traditional models, it calculates the expectation value of the total damage (summed over the total population and over all time) for typical emission sites, rather than "worst-case" estimates for specific sites or episodes. Such a model is needed for the evaluation of many environmental policy measures, e.g., the optimal level of pollution taxes or emission limits. Based on the methodology that has been developed by USEPA for the assessment of multimedia pathways, the equations and parameters are assembled for the assessment of As, Cd, Cr, Hg, Ni, and Pb, and some typical results are presented (the dose from seafood is not included, and for Hg the results are extremely uncertain); the model is freely available on the web. The structure of the model is very simple because, as we show, if the parameters can be approximated by time-independent constants (the case for the USEPA methodology), the total impacts can be calculated with steady-state models even though the environment is never in steady state. The collective ingestion dose is found to be roughly 2 orders of magnitude larger than the collective dose via inhalation. The uncertainties are large, easily an order of magnitude, and arise mainly from the parameter values of the model, in particular the transfer factors. Using linearized dose-response functions, estimates are provided for cancers due to As, Cd, Cr, and Ni, as well as IQ loss due to Pb emissions in Europe.
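A back-of-the-envelope sketch of the impact chain described above: collective dose = emission × (dose per unit emission), health impact = collective dose × linear dose-response slope. All numbers below are invented placeholders, not values from the article or the USEPA methodology.

```python
# Expectation-value impact arithmetic with hypothetical factors.
emission_kg = 100.0            # annual emission of a metal (hypothetical)
ingestion_dose_per_kg = 1e-6   # collective ingestion dose factor (hypothetical)
inhalation_dose_per_kg = 1e-8  # ~2 orders of magnitude smaller, as in the text
slope_cases_per_dose = 0.05    # linearized dose-response slope (hypothetical)

collective_dose = emission_kg * (ingestion_dose_per_kg + inhalation_dose_per_kg)
expected_cases = collective_dose * slope_cases_per_dose
print(f"expected cases (all population, all time): {expected_cases:.2e}")
```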

15.
A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log‐likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
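Since the key point is that the E-step restores additive separability, a minimal sketch for a two-component Gaussian mixture may help: after the posterior weights are computed, each component's parameters can be updated sequentially on its own weighted data. The data and starting values are simulated placeholders, and this is plain EM, not the paper's extended estimator class.

```python
# EM for a two-component Gaussian mixture with a separable M-step.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each point came from component 1
    d0 = (1 - pi) * norm.pdf(x, mu[0], sd[0])
    d1 = pi * norm.pdf(x, mu[1], sd[1])
    w = d1 / (d0 + d1)
    # M-step: separable -- each component updated on its own weighted data
    for k, wk in enumerate([1 - w, w]):
        mu[k] = np.sum(wk * x) / wk.sum()
        sd[k] = np.sqrt(np.sum(wk * (x - mu[k]) ** 2) / wk.sum())
    pi = w.mean()
print(pi, mu, sd)
```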

16.
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model providing the best fit to combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured.

17.
National policy initiatives require the expenditure of large amounts of resources over several years. It is common for these initiatives to generate large amounts of data that are needed in order to assess their success. Educational policies are an obvious example. Here we concentrate on Mexico's "Educational Modernisation Programme" and examine how this plan has affected efficiency in teaching and research at Mexico's universities. We use a combined approach that includes traditional ratios together with Data Envelopment Analysis (DEA) models. This mixture allows us to assess changes in efficiency at each individual university and explore whether these changes are related to teaching, to research, or to both. Using official statistics for 55 universities over a six-year period (2007–2012), we have generated 12 ratios and estimated 21 DEA models under different definitions of efficiency. In order to make the results of the analysis accessible to the non-specialist, we use models that visualise the main characteristics of the data, in particular scaling models of multivariate statistical analysis. Scaling models highlight the important aspects of the information contained in the data. Because the data are three-way (variables, universities, and years), we have chosen the Individual Differences Scaling model of Carroll and Chang. We complete the paper with a discussion of efficiency evolution in three universities.
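To illustrate the DEA building block, here is a minimal input-oriented CCR model in multiplier form, solved per decision-making unit (DMU) with scipy.optimize.linprog; the two-input/two-output data are invented, not the official statistics for the 55 universities, and the 21 models in the article use richer variable definitions.

```python
# Input-oriented CCR DEA (multiplier form): for DMU o,
#   max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[30.0, 40], [20, 50], [40, 30]])   # inputs (e.g., staff, budget)
Y = np.array([[100.0, 5], [80, 8], [120, 3]])    # outputs (e.g., graduates, papers)

def ccr_efficiency(o):
    n_in, n_out = X.shape[1], Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n_in)])           # maximize u.y_o
    A_eq = np.concatenate([np.zeros(n_out), X[o]])[None]  # v.x_o = 1
    A_ub = np.hstack([Y, -X])                             # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```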

18.
In an effort to improve the small sample properties of generalized method of moments (GMM) estimators, a number of alternative estimators have been suggested. These include empirical likelihood (EL), continuous updating, and exponential tilting estimators. We show that these estimators share a common structure, being members of a class of generalized empirical likelihood (GEL) estimators. We use this structure to compare their higher order asymptotic properties. We find that GEL has no asymptotic bias due to correlation of the moment functions with their Jacobian, eliminating an important source of bias for GMM in models with endogeneity. We also find that EL has no asymptotic bias from estimating the optimal weight matrix, eliminating a further important source of bias for GMM in panel data models. We give bias corrected GMM and GEL estimators. We also show that bias corrected EL inherits the higher order property of maximum likelihood, that it is higher order asymptotically efficient relative to the other bias corrected estimators.
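The shared structure can be made concrete in a few lines: each GEL member is defined by a carrier function ρ(v) in the saddle-point problem min_θ sup_λ Σᵢ ρ(λ′gᵢ(θ)). The toy just-identified example below (estimating a mean from simulated data with crude bounded optimizers) is only an illustration of the structure, not an efficient implementation; in the just-identified case all three members recover the sample mean.

```python
# GEL saddle-point structure with the standard carrier functions.
import numpy as np
from scipy.optimize import minimize_scalar

rho = {
    "EL":  lambda v: np.log(1.0 - v),      # empirical likelihood
    "ET":  lambda v: -np.exp(v),           # exponential tilting
    "CUE": lambda v: -v - 0.5 * v ** 2,    # continuous updating (up to a constant)
}

x = np.random.default_rng(2).standard_normal(100) + 1.0
g = lambda theta: x - theta                # moment condition E[x - theta] = 0

def gel_objective(theta, kind):
    # inner sup over lambda (tightly bounded to keep EL's log in domain)
    inner = minimize_scalar(lambda lam: -rho[kind](lam * g(theta)).sum(),
                            bounds=(-0.01, 0.01), method="bounded")
    return -inner.fun

for kind in rho:
    est = minimize_scalar(lambda th: gel_objective(th, kind),
                          bounds=(-3, 3), method="bounded").x
    print(kind, round(est, 4))
```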

19.
The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of the information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or to increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We also explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations.

20.
Large Contests     
We consider contests with many, possibly heterogeneous, players and prizes, and show that the equilibrium outcomes of such contests are approximated by the outcomes of mechanisms that implement the assortative allocation in an environment with a single agent that has a continuum of possible types. This makes it possible to easily approximate the equilibria of contests whose exact equilibrium characterization is complicated, as well as the equilibria of contests for which there is no existing equilibrium characterization.
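For concreteness, the assortative allocation that the approximating mechanisms implement is simply rank-to-rank matching: the k-th highest type receives the k-th best prize. The types and prize values below are invented for illustration.

```python
# Assortative allocation: match ranked players to ranked prizes.
types  = [0.9, 0.2, 0.5, 0.7]    # player types (hypothetical)
prizes = [10.0, 5.0, 2.0, 0.0]   # prize values, best first (hypothetical)

ranked = sorted(range(len(types)), key=lambda i: types[i], reverse=True)
allocation = {player: prize for player, prize in zip(ranked, prizes)}
print(allocation)   # highest type -> best prize, and so on down the ranking
```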
